A Comparative Study on the Cost and Release Time of Software Development Model Based on Lindley-Type Distribution

In this study, the software development cost model with the Lindley-Type (Basic-Lindley and Modified-Lindley) distribution property was compared with the Goel-Okumoto basic model, and the attributes of software development cost and optimal release time were newly analyzed. For this purpose, software failure time data were used, parameters were estimated by the maximum likelihood method, and the resulting nonlinear equations were solved using the bisection method. Comparing the Lindley-Type models, we confirmed that the Modified-Lindley model is more efficient than the Basic-Lindley model because it has a lower software development cost and an earlier software release time. We also confirmed that the Goel-Okumoto basic model is more efficient than the Lindley-Type models. Through this study, we newly analyzed the attributes of the Lindley-Type software development cost model, for which no previous study exists, and we expect that software developers will be able to use these results as a basic guideline for exploring the attributes of economical software development cost and optimal release time.

INTRODUCTION

Software technology, the core of the 4th industrial revolution era, has rapidly converged with various industrial fields, and the need for software that can process large amounts of data accurately and without failure keeps increasing. If the economical software development cost and the optimal release time can be predicted during the development process, development can proceed efficiently. For this reason, research on software development costs has been actively conducted alongside software reliability issues, and software reliability models and software development cost models using the Non-Homogeneous Poisson Process (NHPP) have been proposed [1]. Concerning NHPP reliability models in particular, Goel and Okumoto [2] proposed an exponential software reliability model, and Shyur [3] proposed a generalized reliability model using change-points. Pham and Zhang [4] proposed a new NHPP software reliability model with testing coverage, and Kim [5] analyzed the cost of a software development model based on the Burr-Hatke exponential distribution. Also, Yang [6] proposed analysis and prediction methods for an NHPP software reliability model with the logistic distribution property. Therefore, in this study, the software development cost model with the Lindley-Type (Basic-Lindley and Modified-Lindley) lifetime distribution property is compared with the Goel-Okumoto basic model; we then analyze the attributes of development cost and release time through the results and present information on economical development cost and optimal release time.

Goel-Okumoto Basic Model

The Goel-Okumoto model is a well-known basic model in the software reliability field. This model assumes the exponential distribution as the lifetime distribution per fault. Let f(t) and F(t) for the Goel-Okumoto basic model be the probability density function and the cumulative distribution function, respectively. Assuming that the expected number of defects that can eventually be observed is θ, the intensity (strength) function λ(t) and the mean value function m(t) are as follows:

λ(t) = θ f(t) = θ b e^(−bt)    (Eq. 1)
m(t) = θ F(t) = θ (1 − e^(−bt))    (Eq. 2)

Note that t ∈ [0, ∞) and b > 0 is the shape parameter.
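To make the model concrete, the following is a minimal Python sketch (not taken from the paper) of the Goel-Okumoto intensity and mean value functions of Eq. 1 and Eq. 2; the parameter values are illustrative assumptions only.

```python
import numpy as np

def go_intensity(t, theta, b):
    """Goel-Okumoto intensity function, Eq. 1: lambda(t) = theta * b * exp(-b*t)."""
    return theta * b * np.exp(-b * t)

def go_mean_value(t, theta, b):
    """Goel-Okumoto mean value function, Eq. 2: m(t) = theta * (1 - exp(-b*t))."""
    return theta * (1.0 - np.exp(-b * t))

# Illustrative (assumed) parameters, not the paper's estimates.
theta, b = 30.0, 0.05
print(go_mean_value(10.0, theta, b))  # expected number of faults detected by t = 10
```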
The likelihood function of the finite-fault NHPP model using Eq. 1 and Eq. 2 is derived as follows (here, finite failure means that no new defect occurs during the repair period):

L(θ, b | x) = [∏ from i=1 to n of λ(x_i)] · exp(−m(x_n))    (Eq. 3)

where x_1 < x_2 < … < x_n are the observed failure times. Taking logarithms yields the log-likelihood function (Eq. 4). Therefore, the maximum likelihood estimators θ̂ and b̂ satisfying the following Eq. 5 and Eq. 6 can be estimated by a numerical method:

∂ ln L / ∂θ = n/θ − (1 − e^(−b x_n)) = 0    (Eq. 5)
∂ ln L / ∂b = n/b − Σ x_i − θ x_n e^(−b x_n) = 0    (Eq. 6)

Basic-Lindley Distribution NHPP Model

The Basic-Lindley lifetime distribution is widely known as a suitable model for lifetime testing and stress-strength reliability. The Basic-Lindley distribution is a mixture of an exponential and a gamma distribution [7]. Let f(t) and F(t) for the Basic-Lindley model be the probability density function and the cumulative distribution function, respectively:

f(t) = (b² / (1 + b)) (1 + t) e^(−bt)    (Eq. 7)
F(t) = 1 − ((1 + b + bt) / (1 + b)) e^(−bt)    (Eq. 8)

Note that t ∈ [0, ∞) and b > 0 is the shape parameter. Assuming that the expected number of defects that can eventually be observed is θ, the finite-fault intensity function λ(t) and the mean value function m(t) are as follows:

λ(t) = θ f(t)    (Eq. 9)
m(t) = θ F(t)    (Eq. 10)

The likelihood function of the finite-fault NHPP model using Eq. 9 and Eq. 10 is derived as in Eq. 3 (Eq. 11), and the corresponding log-likelihood function follows from it (Eq. 12). Therefore, the maximum likelihood estimators θ̂ and b̂ satisfying Eq. 13 and Eq. 14 can be estimated by a numerical method.

Modified-Lindley Distribution NHPP Model

Shanker [8] proposed a Modified-Lindley model that modifies the Basic-Lindley model. Let f(t) and F(t) for the Modified-Lindley model be its probability density function and cumulative distribution function, respectively (Eq. 15 and Eq. 16), with t ∈ [0, ∞) and shape parameter b > 0. Assuming that the expected number of defects that can eventually be observed is θ, the finite-fault intensity function and mean value function again take the forms λ(t) = θ f(t) (Eq. 17) and m(t) = θ F(t) (Eq. 18). The likelihood function of the finite-fault NHPP model using Eq. 17 and Eq. 18 is derived as in Eq. 19. After solving the corresponding log-likelihood equations, the maximum likelihood estimators θ̂ and b̂ satisfying Eq. 20 and Eq. 21 can be estimated by a numerical method.

Software Development Cost Model

The estimated total cost of development based on the software development cost model is as follows [9]:

C(t) = C₁ + C₂(t) + C₃(t) + C₄(t)    (Eq. 22)

Note that C(t) is the estimated total cost of software development.
① C₁ stands for the initial software development cost and is considered a constant.
② C₂(t) is the cumulative testing cost, where the actual cost per unit time differs for each industry sector; with c₂ the testing cost per unit time and t the testing time point, it is expressed by Eq. 23 as C₂(t) = c₂ · t.
③ C₃(t) represents the cost of detecting and removing defects during the testing process; with c₃ the cost of removing one defect found in testing and m(t) the expected number of defects detectable by time t, it is expressed by Eq. 24 as C₃(t) = c₃ · m(t).
④ C₄(t) represents the cost of eliminating all defects remaining in the operational software; with c₄ the cost of correcting a defect discovered by the operator after the software is released, and t′ the time until the software operates normally after release, it is expressed by Eq. 25 as C₄(t) = c₄ · (m(t + t′) − m(t)).
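As a rough illustration of Eq. 22 through Eq. 25, here is a hedged Python sketch that evaluates the total cost C(t) on a time grid; the cost coefficients and t′ are invented for illustration, and m(t) is the Goel-Okumoto mean value function sketched earlier.

```python
import numpy as np

def total_cost(t, m, c1, c2, c3, c4, t_prime):
    """Eq. 22: C(t) = C1 + c2*t + c3*m(t) + c4*(m(t + t') - m(t)).
    m is a mean value function, e.g. the Goel-Okumoto m(t) with parameters bound."""
    return c1 + c2 * t + c3 * m(t) + c4 * (m(t + t_prime) - m(t))

# Illustrative (assumed) inputs; note c4 > c2, c3, matching the text below.
theta, b = 30.0, 0.05
m = lambda t: theta * (1.0 - np.exp(-b * t))
grid = np.linspace(1.0, 400.0, 4000)
costs = total_cost(grid, m, c1=50.0, c2=1.0, c3=3.0, c4=30.0, t_prime=50.0)
print(grid[np.argmin(costs)], costs.min())  # the optimal release time minimizes C(t)
```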
In reality, C₄ is higher than C₂ and C₃. Therefore, this study assumes the realistic situation in which C₄ is higher than C₂ and C₃. The optimal software release time with respect to the software development cost can then be derived as the time point t* at which C(t*) = min C(t); in other words, the optimal release time is the time point at which the lowest development cost is attained.

THE PROPOSED ALGORITHM AND SOLUTIONS

In this study, the proposed algorithm for analyzing and predicting software development costs is as follows.

Software development cost analysis algorithm:
Step 1: Validate the collected software failure data using Laplace trend test analysis.
Step 2: Calculate the parameters (θ̂, b̂) for each proposed model using maximum likelihood estimation.
Step 3: Analyze the results of changing the costs (C₃, C₄) that make up the total cost of software development.
Step 4: Determine the most efficient model, i.e., the one meeting both optimal software development cost and release time.
Step 5: Provide the analysis information about software development cost and release time with reliability verification.

Let us now compare and analyze the attributes of the proposed development cost models using the software failure time data [10] shown in Table 1. This data set records 30 failures over 187.35 unit time. The Laplace trend test was used to verify the reliability of the software failure time data, as shown in Fig 1 [11].

Fig 1. Estimation results of the Laplace trend test

In general, if the Laplace factor estimates are distributed between −2 and 2, the data are reliable and stable because no extreme values exist. As shown in Figure 1, the estimated values of the Laplace factor were distributed between 0 and 2; therefore, this data can be applied because there are no extreme values. In this study, maximum likelihood estimation (MLE) was used for parameter estimation [12], and numerically converted data (failure time [hours] × 10⁻¹) were used to facilitate the estimation. The nonlinear likelihood equations were solved using the bisection method (a sketch of this root-finding step is given below), and the results are shown in Table 2. In this study, we assumed software development costs under [Supposition 1] to [Supposition 4] to simulate an actual software development environment. To do this, we analyze and predict software development cost and release time by changing each component (C₃, C₄, t′) of the total software development cost [13].

[Supposition 1: Basic conditions] The result of the cost curve under [Supposition 1] is shown in Figure 2.
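Before turning to the cost curves, here is the bisection sketch referenced above. It is an assumption-laden illustration, not the authors' code: it solves the Goel-Okumoto likelihood equation in b obtained by substituting Eq. 5 into Eq. 6, then recovers θ̂ in closed form, and the bracketing interval is an assumed guess that must contain the root.

```python
import numpy as np

def go_profile_score(b, times):
    """Eq. 6 after substituting theta_hat = n / (1 - exp(-b*t_n)) from Eq. 5."""
    t = np.asarray(times, dtype=float)
    n, tn = len(t), t[-1]
    e = np.exp(-b * tn)
    return n / b - t.sum() - n * tn * e / (1.0 - e)

def bisect_mle(times, lo=1e-6, hi=1.0, tol=1e-10):
    """Bisection on the profile score; assumes a sign change on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if go_profile_score(lo, times) * go_profile_score(mid, times) <= 0.0:
            hi = mid
        else:
            lo = mid
    b_hat = 0.5 * (lo + hi)
    theta_hat = len(times) / (1.0 - np.exp(-b_hat * times[-1]))
    return theta_hat, b_hat
```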
In Figure 2, the transition of the development cost curve shows a decreasing pattern in the initial stage, then a roughly constant pattern for a short period, and finally an increasing pattern as the release time passes. The reason is that during the initial stage the development cost decreases as the defects inherent in the software are removed, but in the latter stage the probability of finding the remaining defects gradually falls, so the development cost increases. As a result, the development cost curve gradually rises as the release time passes.

[Supposition 2: Assumed that the cost C₃ is increased in Supposition 1] In this case, the development cost increases, but the release time does not change. Therefore, as many defects as possible should be removed at once so that the cost of removing one defect at the software testing stage does not grow. Also, the Modified-Lindley model is relatively superior to the Basic-Lindley model because of its lower development cost and faster release time.

[Supposition 3: Assumed that the cost C₄ is increased in Supposition 1] As shown in Figure 4, the release time is delayed as the development cost increases. Therefore, in this case, we must eliminate all possible defects at the testing stage rather than the operational stage, so as to reduce the remaining defects before releasing the software. Also, the Goel-Okumoto basic model is the most efficient because it has a lower development cost and a faster release time than the Lindley-Type models. Comparing the Lindley-Type models by the same method, we can see that the Modified-Lindley model is relatively superior to the Basic-Lindley model. In conclusion, we can predict the optimal software release time together with the development cost trend.

[Supposition 4: Assumed that the time t′ is increased in Supposition 1] [Supposition 4] is the case where the time (t′) until the software operates normally after release is doubled compared with [Supposition 1]. As shown in Figure 5, the development cost increases, but the release time does not change. Also, in this figure, before the optimal release time the cost increases as t′ increases, but after the optimal release time the cost is almost the same for all models. Therefore, we can analyze the optimal software release time together with the development cost trend, and this is expected to help software operators predict software development costs and release time [14].

CONCLUSION

If software development costs can be quantitatively modeled together with the release time during the software development process, the attributes of development costs can be efficiently analyzed and predicted. Therefore, this study analyzes and predicts software development cost together with the optimal software release time through NHPP reliability models based on the Lindley-Type distribution property. The results of this study can be summarized as follows.

First, under the given basic conditions (Supposition 1), the software development cost curve shows a roughly constant pattern for a short time after a significant decrease in the initial stage, but shows an increasing pattern again in the latter stage as the release time passes.
Second, before the software release, if the cost (C₃) of removing one defect found in the testing process increases, the development cost increases as well, but the release time does not change. However, after the software release, if the correction cost (C₄) for defects discovered by the software operator increases, the development cost increases and the release time is also delayed.

Third, as the time (t′) until the software operates normally after release increases, the cost before the optimal release time increases, but after the optimal release time the cost is almost the same for all models.

As a result of a comprehensive analysis of the proposed models, the Goel-Okumoto basic model is relatively efficient because it has a lower software development cost and a faster release time than the Lindley-Type models. Also, comparing the Lindley-Type models, the Modified-Lindley model is relatively superior to the Basic-Lindley model.

Using the results of this study, it is possible to provide software developers and operators with the prior information necessary for predicting the most economical software development cost and the optimal release time. Further studies will be needed to find the optimal software development cost model through analysis of other models having the same type of failure time data distribution.

Fig 4. The development cost curve applied to the condition of [Supposition 3]
Fig 5. The development cost curve applied to the condition of [Supposition 4]
Table 1. Software Failure Time Data
Genetically Predicted Glucose-Dependent Insulinotropic Polypeptide (GIP) Levels and Cardiovascular Disease Risk Are Driven by Distinct Causal Variants in the GIPR Region

There is considerable interest in GIPR agonism to enhance the insulinotropic and extrapancreatic effects of GIP, thereby improving glycemic and weight control in type 2 diabetes (T2D) and obesity. Recent genetic epidemiological evidence has implicated higher GIPR-mediated GIP levels in raising coronary artery disease (CAD) risk, a potential safety concern for GIPR agonism. We therefore aimed to quantitatively assess whether the association between higher GIPR-mediated fasting GIP levels and CAD risk is mediated via GIPR or is instead the result of linkage disequilibrium (LD) confounding between variants at the GIPR locus. Using Bayesian multitrait colocalization, we identified a GIPR missense variant, rs1800437 (G allele; E354), as the putatively causal variant shared among fasting GIP levels, glycemic traits, and adiposity-related traits (posterior probability for colocalization [PPcoloc] > 0.97; PP explained by the candidate variant [PPexplained] = 1) that was independent from a cluster of CAD and lipid traits driven by a known missense variant in APOE (rs7412; distance to E354 ∼770 kb; R² with E354 = 0.004; PPcoloc > 0.99; PPexplained = 1). Further, conditioning the association between E354 and CAD on the residual LD with rs7412, we observed slight attenuation in association, but it remained significant (odds ratio [OR] per copy of E354 after adjustment 1.03; 95% CI 1.02, 1.04; P = 0.003). Instead, E354's association with CAD was completely attenuated when conditioning on an additional established CAD signal, rs1964272 (R² with E354 = 0.27), an intronic variant in SNRPD2 (OR for E354 after adjustment for rs1964272: 1.01; 95% CI 0.99, 1.03; P = 0.06). We demonstrate that associations with GIP and anthropometric and glycemic traits are driven by genetic signals distinct from those driving CAD and lipid traits in the GIPR region and that higher E354-mediated fasting GIP levels are not associated with CAD risk. These findings provide evidence that the inclusion of GIPR agonism in dual GIPR/GLP1R agonists could potentiate the protective effect of GLP-1 agonists on diabetes without undue CAD risk, an aspect that has yet to be assessed in clinical trials.

Little direct preclinical experimental evidence exists for GIPR agonism contributing to cardiovascular disease (CVD) risk (15,16). GIP exhibits antiatherogenic effects on vascular endothelial cells (17)(18)(19)(20), with the exception that it has been reported to stimulate expression of osteopontin in the vasculature in an endothelin-1-dependent manner (21). Additionally, GIP exerts anti-inflammatory effects on monocytes/macrophages (17,22). These in vitro findings are reflected by cardioprotective GIP pharmacology in mouse models of atherosclerosis irrespective of their diabetes condition (17,22,23). Further, GIP infusion or overexpression is protective in mouse models of restenosis and cardiac remodeling (17,24). While germline or cardiomyocyte-selective knockout of GIPR protected against ischemic injury, GIP itself was not deleterious (25). Further, cardiac-selective knockout of the GIPR was not protective in experimental models of heart failure (25). In contrast with these preclinical experimental findings, recent evidence suggests that fasting GIP levels are associated with increased carotid intimal thickening (26).
In addition, evidence from a recent meta-analysis (27) of two large population-based cohort studies suggests that higher fasting but not postchallenge GIP levels were associated with increased risk of CVD mortality (hazard ratio 1.30; 95% CI 1.11, 1.52; P = 0.001). GLP-1 was not associated with CVD mortality, consistent with clinical trial data (28)(29)(30)(31) and genetic evidence (32) highlighting the beneficial effects of GLP1R agonism. Genetic evidence from two-sample Mendelian randomization (2SMR) has reinforced suggestions that higher GIP levels raise CVD risk (27). A missense variant in GIPR, rs1800437 (E354Q), encoding a substitution of glutamine for glutamic acid at position 354 of the GIPR protein, was used as an instrumental variable for fasting GIP levels (27). The 354Q allele has been reported to reduce GIPR signaling by increasing receptor desensitization and downregulation (33). This variant has previously been associated with higher 2-h glucose (34), BMI (35), and fasting and 2-h GIP levels (36). In line with a predicted causal direction from fasting GIP levels to coronary artery disease (CAD) risk, estimates in the reverse direction showed no significant effect of CAD on fasting GIP levels (27). These estimates should be interpreted with caution, however, as 1) they represent the association of a single variant with CAD risk and do not model the effects of other variants in the region, which may dampen or modulate this effect, and 2) they do not take into account that the association between E354 and CAD may be entirely synthetic due to linkage disequilibrium (LD) between this variant and the true CAD causal variant. Considering the pharmacological interest in modulating this pathway as a potential T2D therapeutic, increases in CVD risk would represent a major concern regarding the safety and continued development of these therapies. We aimed to quantitatively assess whether the association between higher GIPR-mediated fasting GIP levels and CAD risk is mediated via GIPR or is the result of LD between variants in GIPR and other variants in the region. Using 2SMR, we aimed to quantify the association of higher fasting GIP levels with CAD and other metabolically relevant traits, including ∼6,000 omics biomarkers, using E354 as an instrumental variable. Next, using Bayesian colocalization, we aimed to partition the traits associated with E354 into distinct clusters driven by shared independent variants. Finally, using conditional analysis, we aimed to assess whether any of these associations are confounded by LD between E354 and other variants in the GIPR region.

Study Design

Three sets of genetic analyses were used to investigate the relationship between higher GIPR-mediated fasting GIP levels and CVD risk. Firstly, using univariate 2SMR, we explored the association of higher fasting GIP levels with CAD and 23 different cardiometabolic diseases, along with anthropometric, glycemic, and lipid traits and ∼6,000 omics biomarkers from both in-house and publicly available data, with E354 as a proxy (Supplementary Table 1). Next, Bayesian multitrait colocalization was used to partition the traits associated with E354 into distinct clusters driven by shared causal variants. Finally, conditional analyses were used to assess whether any of the associations with E354 are confounded by LD between E354 and other variants in the GIPR region, implying that their associations are mediated not via GIPR but, rather, via other genes in the region.
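Since the first of these analyses rests on the single-variant Wald ratio, here is a minimal Python sketch of that estimator, using the common first-order approximation that treats the variant-exposure estimate as fixed; all numbers are invented for illustration.

```python
from scipy import stats

def wald_ratio(beta_exposure, beta_outcome, se_outcome):
    """Single-instrument 2SMR: causal estimate = beta_outcome / beta_exposure.
    First-order delta-method SE ignores uncertainty in beta_exposure."""
    estimate = beta_outcome / beta_exposure
    se = se_outcome / abs(beta_exposure)
    p = 2.0 * stats.norm.sf(abs(estimate) / se)
    return estimate, se, p

# Invented summary statistics: variant-exposure (GIP) and variant-outcome betas.
print(wald_ratio(beta_exposure=0.08, beta_outcome=0.004, se_outcome=0.002))
```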
Study Participants

European Prospective Investigation into Cancer and Nutrition (EPIC)-Norfolk (37) (Supplementary Table 2) is a population-based prospective cohort of individuals aged between 40 and 79 years living in Norfolk (a county of the U.K.) at the time of recruitment from primary care outpatient clinics in the city of Norwich and surrounding areas. EPIC-Norfolk (37) consists of two subcohorts, a T2D case-cohort and a quasi-random selection of participants from the larger EPIC study (38,39). UK Biobank (Supplementary Table 2) is a population-based cohort study of individuals recruited from 22 rural and urban recruitment centers in the U.K. European ancestry participants with available genome-wide genotyping and phenotypic data were included in this study. Ethics approval for the UK Biobank study was given by the North West-Haydock Research Ethics Committee (16/NW/0274). This research was conducted using application 44448. Participants gave electronic consent for use of their anonymized data and samples for health-related research, to be recontacted for further substudies, and for access to their health-related records.

Genotyping and Imputation

Genome-wide genotyping in the Fenland cohort was performed in three subcohorts with use of the Affymetrix Genome-Wide Human SNP Array 5.0, the Affymetrix UK Biobank Axiom Array, or the Illumina CoreExome-24 v1 BeadChip, with imputation to the Haplotype Reference Consortium v1.1 (42), the 1000 Genomes Project (43), and the UK10K (44) reference panels. Samples from EPIC-Norfolk and UK Biobank were genotyped with the Affymetrix UK Biobank Axiom Array and imputed to the same reference panels.

Profiling of the Plasma Proteome

Fasting EDTA plasma samples from 12,084 participants from the Fenland (40) study were subjected to proteomic profiling by SomaLogic (Boulder, CO) using an aptamer-based technology (SomaScan v4). The relative abundances of 4,775 human proteins were measured using 4,979 SOMAmers (45). To account for within-run hybridization variability, control probes were used to generate a scaling factor for each sample. Differences in total signal between samples as a result of variation in overall protein concentration or technical variability such as reagent concentration, pipetting, or assay timing were accounted for using the ratio between each SOMAmer measured value and a reference value. The median of these ratios was computed for each dilution set (40%, 1%, and 0.005%) and applied to each dilution set. Samples were removed if they failed SomaLogic quality control measures or did not meet the acceptance criteria of between 0.25 and 4.00 for all scaling factors. A total of 10,078 samples had available genotype data and were used in this study. Aptamer target annotations and mapping to UniProt accession numbers as well as gene identifiers were provided by SomaLogic.

Plasma Metabolomic Profiling

Within EPIC-Norfolk (described previously) (37), the levels of up to 1,504 metabolites were measured in three batches using the Metabolon DiscoveryHD4 platform (46) (Metabolon, Durham, NC) in citrate plasma samples collected at baseline. Measurements were made in ∼12,000 samples, in two sets of ∼6,000 quasi-randomly selected samples, which were preceded by measurements in an incident T2D case-cohort (N = 1,503; 857 in the subcohort). Briefly, raw data were extracted and peaks were identified and assessed for quality by Metabolon.
Metabolite identification was done by comparing measures with a curated library containing the retention time, mass-to-charge ratio, and chromatographic data of known metabolites. Each metabolite was then quantified with an area-under-the-curve method, and the data were normalized to correct for instrument tuning variations across run days. For data normalization, the median value for each metabolite was set to 1 within each run day, normalizing each measurement proportionately. Metabolite annotations and pathway classifications are as reported by Metabolon.

Genome-Wide Association Study of Plasma Proteins and Pairwise Colocalization of GIP Levels With Cardiometabolic Traits

Genome-wide association study (GWAS) was performed as described in Supplementary Table 3. Two SOMAmers targeted circulating GIP, namely, 16292-288 and 5755-29. SOMAmer 16292-288 was selected against amino acids 1-93 of the precursor protein (UniProt identifier P09681), whereas 5755-29 targeted amino acids 22-153. SOMAmers are relative measures of GIP abundance; therefore, to ascertain whether the underlying genetics at GIPR were comparable with previous results (36), we performed pairwise genetic colocalization analyses between GIP measures and cardiometabolic traits. T2D, coronary heart disease, BMI, 2-h glucose adjusted for BMI, and LDL were included as cardiometabolic traits of interest (Supplementary Table 1). Summary statistics from a GWAS of 2-h glucose adjusted for BMI in Fenland (Supplementary Table 3) were preferred over those from previous efforts (34) due to denser variant coverage. Using GWAS summary statistics for each trait, the 1-Mb regions either side of E354 (chromosome 19: 45181392-47181392) were extracted. Insertions and deletions, as well as any variants with a standard error of 0, were removed. Effect estimates were aligned to the GIP-raising alleles. Pairwise colocalization was conducted using the COLOC (47) R package. The priors p1 and p2, the prior probabilities that a variant is associated with either trait, were set to 1 × 10⁻⁴, and p12, the probability that a single variant is associated with both traits, was set to 1 × 10⁻⁵. T2D and coronary heart disease were treated as case-control traits and all other traits as quantitative. Posterior probabilities for colocalization (PPcoloc) were considered significant if they met the following criteria: H4 + H3 ≥ 0.9 and H4/H3 ≥ 3, where H3 is the PP for two distinct genetic signals and H4 the PP for a shared genetic signal.

GWAS of Plasma Metabolites

GWAS was performed in two sets, for all metabolites present in at least 100 individuals in both sets. The first set consisted of up to 5,841 individuals from both the subcohort of the T2D case-cohort and the first batch of quasi-randomly selected samples. The second set consisted of up to 5,698 individuals from the second batch of quasi-randomly selected samples. GWAS was performed as described in Supplementary Table 3.

Association Between E354 and Cardiometabolic and Molecular Traits

This work leveraged regional GWAS summary statistics from in-house studies and data from published studies in the 1-Mb regions either side of E354. Details on all included phenotypes can be found in Supplementary Table 1. GWAS for phenotypes derived in-house were performed as described in Supplementary Table 3. Only self-reported White European participants were included for all outcomes, except for plasma metabolite measures in EPIC-Norfolk (37), where all participants were included.
However, participants in EPIC-Norfolk (37) overwhelmingly self-reported as White European. We performed univariate 2SMR using the Wald ratio method (48) to estimate the potential causal effect of fasting GIP levels on various traits (Supplementary Table 1). Genetically predicted fasting GIP levels were used as the exposure with E354 as the instrumental variable (Human Genome Organisation [HUGO] gene: GIPR; National Center for Biotechnology Information [NCBI] transcript NM_000164.4 c.1060G>C; protein change, E354Q; the E354 variant is encoded by the G allele). All summary statistics were aligned to the fasting GIP-raising allele (G) of E354. Bonferroni-corrected significance thresholds were used to ascertain the statistical significance of E354 across all outcomes.

Partial Correlations Between X-12283 and Known Metabolites

To estimate the metabolite class and putative functional pathway of X-12283, we estimated partial correlations between X-12283 levels and the levels of other metabolites measured in 11,966 participants from EPIC-Norfolk. First, missing metabolite measures were imputed within each measurement set with use of multivariate imputation by chained equations (MICE) (49) with the R package mice v3.6.0. To ensure accurate imputation, we only considered the 883 metabolites with <50% missingness within both measurement sets. Imputation was repeated a total of 20 times, generating 20 sets of fully imputed results. Following imputation, measures were standardized (mean = 0, SD = 1). For each imputation, partial correlations between metabolite pairs were calculated with the R package GeneNet v1.2.14. Partial correlation estimates were transformed with the Fisher Z transformation using the R package psych v1.9.12.31, and then pooled across the 20 imputations for each measurement set using Rubin's rules (50). Estimates for the two measurement sets were then meta-analyzed using a fixed-effects, inverse variance-weighted method in the R package meta v4.12-0, and finally back-transformed to correlation estimates. P values were calculated from the Fisher-transformed partial correlations. Partial correlation estimates with absolute values of >0.1 were then used to draw a Gaussian graphical model in Cytoscape v3.2.1. Partial correlations were considered significant at a Bonferroni significance threshold of P ≤ 1.28 × 10⁻⁷, accounting for the 389,403 metabolite pairs tested.

Multitrait Colocalization Across Cardiometabolic Traits

Multitrait colocalization (HyPrColoc) (51) was used at the GIPR locus to 1) identify cardiometabolic traits that share a common causal variant and 2) partition clusters of cardiometabolic traits driven by distinct causal variants. HyPrColoc was run using the default variant-specific prior configuration; priors 1 and 2 were set at 1 × 10⁻⁴ and 0.02, respectively; and regional and alignment thresholds of 0.5 were used (51). Variants were extracted and excluded from GWAS summary statistics for 26 cardiometabolic traits of interest as in the pairwise colocalizations above, and all variants in perfect LD (R² = 1) with E354 were removed. The GIP measures considered were fasting GIP as measured by SOMAmers 16292-288 and 5755-29, as well as fasting and 2-h GIP measures from the Malmö Diet and Cancer (MDC) subcohort of Almgren et al.
(36). Both the MDC and the Prevalence, Prediction and Prevention of Diabetes (PPP)-Botnia study cohorts were genotyped with exome-wide arrays, thereby limiting the number of variants included in the analysis when considering variants present across all traits. MDC measures were preferred over those from either the PPP-Botnia subcohort or the meta-analysis of the two subcohorts due to denser variant coverage, despite the PPP-Botnia study having a larger sample size. The anthropometric traits, adjusted and unadjusted for BMI (where applicable), were BMI, waist-to-hip ratio (WHR), and hip and waist circumferences. T2D and CAD were included as disease outcomes. Glycemic measures included nonfasting glucose, HbA1c, 2-h glucose adjusted for BMI, fasting glucose adjusted for BMI, and fasting insulin adjusted for BMI. GWAS summary statistics from Fenland were used for fasting and 2-h glucose as well as fasting insulin. Finally, lipid traits included LDL, HDL, total cholesterol, triglycerides, lipoprotein A, apolipoprotein (apo)A1, and apoB. To assess sensitivity in the number and size of clusters identified, increasingly stringent prior and threshold configurations were used: prior 2 values of 0.02, 0.01, and 0.001, and threshold values of 0.5, 0.6, 0.7, 0.8, and 0.9, were considered. T2D and CAD were considered binary case-control traits, and all others were considered quantitative. To estimate the posterior probability (PP) that the candidate variant is the causal variant (PPcausal), we multiplied PPcoloc by the PP explained by the candidate variant (PPexplained). Trait clusters were reported at the recommended (51) thresholds of prior 2 = 0.02 and regional and alignment thresholds = 0.9. To account for low variant coverage in the MDC cohort, we ran a secondary analysis using the same populations, configuration, and sensitivity assessments as above, while excluding the GIP traits measured in MDC. Finally, heat maps were drawn based on similarity matrices estimating how often trait pairs were clustered together across all algorithm parameter choices. In addition, regional association plots were drawn for each cluster with the gassocplot R package and LD data from EPIC-Norfolk. All data analysis was performed with R version 3.6.3.

Conditional Analysis at the GIPR Locus

To determine whether the association between E354 and CAD was due to LD between E354 and other CAD lead variants in the GIPR region, we performed conditional analysis using GCTA (52) v1.93.1. Using full GWAS summary statistics for CAD (53) on chromosome 19, we implemented stepwise selection to identify independent variants associated with CAD. Selection was performed with a threshold of P < 1 × 10⁻⁵, a collinearity threshold between variants of 0.05, and a minor allele frequency threshold of 1%. An LD reference panel from EPIC-Norfolk was used. The association between E354 and CAD was then conditioned on each independent variant to estimate whether the association was attenuated, which would imply that the association was due to residual LD between E354 and an independent variant. This was repeated for all traits associated with E354. If E354 (or a proxy variant in complete LD with E354) was identified as one of the independent variants, conditional analysis was not performed. Following this, regional association plots were generated using LocusZoom v1.2.
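To convey the idea behind this conditioning step, here is a simplified Python sketch, an illustration under strong assumptions (standardized genotypes and phenotype, equal sample sizes, and a known LD correlation matrix), not GCTA's actual implementation: with those assumptions, jointly adjusted effects can be approximated from marginal GWAS effects via the inverse LD matrix.

```python
import numpy as np

def joint_from_marginal(marginal_betas, ld_corr):
    """Approximate joint (mutually adjusted) effects from marginal GWAS betas.
    With standardized genotypes/phenotype, joint OLS betas = R^{-1} @ marginal."""
    return np.linalg.solve(np.asarray(ld_corr), np.asarray(marginal_betas))

# Invented numbers: two variants in LD with r = 0.52 (R^2 = 0.27).
marginal = [0.030, 0.045]            # marginal betas for the two variants
R = [[1.00, 0.52], [0.52, 1.00]]     # LD correlation matrix (assumed known)
print(joint_from_marginal(marginal, R))
# The first variant's effect attenuates once the second is modeled jointly.
```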
To determine whether other variants previously found to be associated with fasting GIP levels (36) were associated with CAD, we extracted their estimates from the CAD summary statistics (53).

Data and Resource Availability

The data sets analyzed during the current study are publicly available, and links are provided in the Supplementary Material; data from Almgren et al. (36) are available from the relevant corresponding author upon reasonable request. All data from UK Biobank are available to approved users upon application. No applicable resources were generated or analyzed during the current study.

Each copy of E354 was associated with 0.03 SD higher BMI (95% CI 0.03, 0.04; P = 3 × 10⁻⁵⁹) (Fig. 1B). Similar associations were observed between E354 and higher regional anthropometric measures from bio-impedance data (Supplementary Fig. 2), as well as hip and waist circumferences and waist-to-hip ratio. In addition, significant associations were found with both higher lean and fat mass from a large GWAS based on bio-impedance data (Supplementary Fig. 2). Next, we estimated the association of E354 with the fasting levels of 4,979 human proteins from the SomaScan v4 assay. Significant associations with the levels of three proteins were found (Supplementary Fig. 3), one of these being 0.08 SD higher fasting GIP levels (95% CI 0.05, 0.11; P = 4 × 10⁻⁶) as measured by SOMAmer 16292-288. Interestingly, in our analysis we did not find a significant association between the other GIP SOMAmer, 5755-29, and E354. Lower levels of secretoglobin family 3A member 1 (SCGB3A1) and glutaminyl-peptide cyclotransferase-like protein (QPCTL) were also found to be associated with E354. In contrast with a previous report (21), no association between E354 and osteopontin was found.

Multitrait Colocalization Across Cardiometabolic Traits at GIPR

A total of 418 genetic variants were included in the main analysis, which was limited by the inclusion of fasting and 2-h GIP measures from MDC (36), whereas 4,996 were included in the secondary analysis (Table 1). Using the recommended prior and threshold configuration, we identified five distinct trait clusters, three of which were shared by both analyses (Table 1). Cluster similarity across all prior and threshold permutations for the two analyses is summarized in heat maps (Fig. 2), and results for all permutations for both analyses can be found in Supplementary Tables 4 and 5. Of the clusters identified, two distinct clusters were of interest. The first, driven by rs7412, a missense variant in the apoE gene (APOE), contained CAD and lipid traits, many of which are established CVD risk factors. Both PPcoloc and PPcausal were estimated to be 1 in the two analyses, demonstrating robust evidence for colocalization (Table 1 and Supplementary Fig. 6). This robustness is further emphasized as the same cluster of traits was identified when using more stringent prior configurations (Fig. 2 and Supplementary Tables 4 and 5).
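The colocalization criteria defined in the pairwise COLOC methods above can be expressed compactly; this small Python helper is purely illustrative (the H3/H4 posterior probabilities themselves come from the coloc R package).

```python
def colocalizes(h3, h4, sum_threshold=0.9, ratio_threshold=3.0):
    """Criteria from the methods: H3 + H4 >= 0.9 and H4/H3 >= 3.
    h3: posterior probability of two distinct causal signals (coloc H3).
    h4: posterior probability of one shared causal signal (coloc H4)."""
    if h3 + h4 < sum_threshold:
        return False
    # Written multiplicatively to avoid dividing by a (numerically) zero H3.
    return h4 >= ratio_threshold * h3

print(colocalizes(h3=0.05, h4=0.93))  # True: strong evidence for a shared signal
```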
A second cluster of GIP, anthropometric, and glycemic traits was driven by rs1800437 (E354) (Table 1 and Supplementary Fig. 7). The PPcoloc for both analyses showed robust evidence for colocalization (main analysis: PPcoloc = 0.97, PPexplained = 1, PPcausal = 0.97; secondary analysis: PPcoloc = 0.91, PPexplained = 0.68, PPcausal = 0.62). A second cluster of BMI and waist circumference driven by E354 was observed in the secondary analysis (Table 1). Sensitivity analyses showed that this split was an artifact of the branch-and-bound clustering algorithm in HyPrColoc and the single-causal-variant assumption (Supplementary Fig. 7). Removal of the clustering algorithm showed that BMI and waist circumference were part of the larger cluster of GIP, anthropometric, and glycemic traits driven by E354 (PPcoloc = 0.95, PPexplained = 1, PPcausal = 0.95). Critically, these results replicate our findings using pairwise-trait colocalization at this locus, showing that fasting GIP levels and CVD risk are driven by independent variants (R² between E354 and rs7412 = 0.004) (Table 1, Supplementary Figs. 6-8, and Fig. 2). Additionally, both colocalization analyses demonstrate that the underlying genetics at GIPR are comparable between GIP levels measured by SOMAmer 16292-288 and the ELISA of previous analyses (36). Together these results robustly demonstrate that the GIP-raising and CVD risk-increasing effects at this locus are distinct (Supplementary Tables 4 and 5). The traits of a third cluster, including a mixture of glycemic and anthropometric traits and apoA1 levels, were estimated to colocalize at rs4420638, which was in LD with rs429358 (R² = 0.69), a missense variant in APOE identified as the candidate variant in the secondary analysis (R² with E354 = 0.001). In the secondary analysis, HDL was also included as part of the cluster. As the secondary analysis included more variants and therefore had greater genomic context, rs429358 is likely to be the candidate variant at which these traits colocalize. The high PPcoloc demonstrated robust evidence for colocalization between these traits at rs429358. Finally, a cluster comprising T2D and T2D adjusted for BMI was identified in the main analysis but was not replicated in the secondary analysis (Table 1). Instead, a cluster of triglycerides and hip circumference adjusted for BMI was identified, driven by an independent variant, rs5117 (R² with rs8108269 < 0.001) (Table 1). This discrepancy is likely to be a result of the number of variants present in the main analysis.

Conditional Analysis at the GIPR Locus

Our univariate two-sample MR results showed that E354 was associated with a total of 20 traits at a nominal significance threshold (Fig. 1). Independent signal selection showed that E354, or proxy variants in high LD (R² > 0.9) with E354, were identified as independent signals for fasting GIP, 2-h glucose, total cholesterol levels, BMI, and X-12283 levels. A total of 24 variants were independently associated with CAD on chromosome 19, four of which were in the 1-Mb regions either side of E354 at the GIPR locus (Table 2). Conditioning the association between E354 and CAD on the residual LD between E354 and rs7412, the variant estimated to drive the cluster with CAD, resulted in a slight attenuation of this association, which nonetheless remained significant (OR per copy of E354 after adjustment 1.03; 95% CI 1.02, 1.04; P = 0.003).
Of the independent variants identified, rs1964272, an intronic variant in small nuclear ribonucleoprotein D2 polypeptide (SNRPD2), was estimated to be in the strongest LD with E354 (R² = 0.27) (Fig. 3 and Supplementary Fig. 9). The association between E354 and CAD risk was attenuated when conditioned on rs1964272 (OR per copy of E354 after adjustment 1.01; 95% CI 0.99, 1.03; P = 0.06) (Table 3). In line with this, the association between rs1964272 and CAD risk was attenuated but remained significant after conditioning on E354 (β per copy of rs1964272 after adjustment 0.02; 95% CI 0.01, 0.03; P = 7 × 10⁻⁴) (Supplementary Table 6). In addition, the association between E354 and small vessel stroke was also attenuated when conditioned on rs1964272 (Table 3). None of the other loci previously shown to be associated with fasting GIP levels were found to be associated with CAD (Supplementary Table 7). Interestingly, rs1964272 was also associated with levels of QPCTL and SCGB3A1, indicating confounding by LD for the proteomics data as well (Supplementary Fig. 10). Conditioning the association between E354 and QPCTL levels on rs1964272 attenuated the association to nonsignificance (β for QPCTL per copy of E354 after adjustment 0.01; 95% CI −0.02, 0.04; P = 0.48) (Table 3). Conditioning the associations of E354 with LDL, apoB, and triglycerides on independent variants for each trait showed that these remained statistically significant despite being attenuated (Table 3), suggesting that E354 may have independent effects on lipid metabolism.

DISCUSSION

In this study, we applied Bayesian multitrait colocalization and conditional analysis to gain greater understanding of the underlying genetic architecture of CAD and its relation to fasting GIP levels at the GIPR locus. Multitrait colocalization robustly identified a cluster of CAD and lipid traits at APOE that was independent of a cluster of fasting and 2-h GIP, glycemic, and anthropometric traits driven by E354. Further, conditional analysis robustly attenuated E354's association with CAD, small vessel stroke, and QPCTL levels upon adjustment for rs1964272 in SNRPD2, an established CAD risk locus (53). Together these results show that association signals for CAD at GIPR are not mediated by an independent effect of GIPR variants on CAD risk but are instead the result of LD confounding between E354 and rs1964272. Taken together, these findings highlight the specificity of E354's effects on fasting GIP levels and robustly demonstrate that higher E354-mediated fasting GIP levels are not associated with CVD risk. These results contradict recent genetic evidence linking higher fasting GIP levels with increased CVD risk (21,27), which had led to concerns that chronic pharmacological GIPR agonism could have detrimental effects on cardiovascular health (27) and had represented safety concerns for pharmacological agonism of this pathway (54). We therefore provide evidence that the inclusion of GIPR agonism in dual GIPR/GLP1R agonists could potentiate the protective effect of GLP-1 agonists on diabetes without undue CVD risk, an aspect not yet assessed in clinical trials. Many studies have shown that GLP1R agonism achieved through chronic pharmacologic therapy, or genetic gain of function, is associated with improved cardiovascular outcomes (28)(29)(30)(31)(32). Hence, the available evidence suggests that dual agonism of these receptors may exploit the metabolically favorable combined pharmacology of these incretins without undue CVD risk.
However, this proposition requires formal assessment in clinical trials such as the recently initiated cardiovascular outcomes trial SURPASS-CVOT of the GIP/GLP1R dual agonist tirzepatide (clinical trial reg. no. NCT04255433, ClinicalTrials.gov). This study has potential limitations. Firstly, our analysis focuses on a single locus associated with both fasting GIP levels and CAD. This assumes that the GIPR locus is a suitable proxy for fasting GIP levels within which to partition the associations of these two complex traits. Considering that the association at this locus with 2-h glucose is statistically robust and in line with the established function of GIP, this is a reasonable assumption. In addition, no other locus has been reported to be associated with both fasting GIP and CAD, and examining the other variants associated with fasting GIP levels (36) in genes other than GIPR showed no association of any of these variants with CAD. However, this does not preclude the existence of other variants that have not yet been associated with GIP levels and may contribute to CVD risk. Patients with T2D are the target of GIPR/GLP1R agonist treatment. We investigated the genetic association of E354 with CAD using the largest publicly available genome-wide summary statistics (53). Analyses stratified by T2D status were therefore not possible, since such results have not been generated and are hence not available. Indeed, pursuing this in individual studies would vastly lower sample sizes and would therefore be underpowered to detect whether associations with CAD differ significantly by T2D status. Specifically, for our conclusion that the E354-CAD association is the result of confounding by LD to be affected, the genetic architecture at GIPR would have to differ between European-descent individuals with and without prevalent T2D, such that the residual confounding by LD differs by T2D status. As LD is generally preserved between individuals from the same ethnic group, this is a very unlikely scenario.
Adult outcome of preterm birth: Implications for neurodevelopmental theories of psychosis

Preterm birth is associated with an elevated risk of developmental and adult psychiatric disorders, including psychosis. In this review, we evaluate the implications of neurodevelopmental, cognitive, motor, and social sequelae of preterm birth for developing psychosis, with an emphasis on outcomes observed in adulthood. Abnormal brain development precipitated by early exposure to the extra-uterine environment, and exacerbated by neuroinflammation, neonatal brain injury, and genetic vulnerability, can result in alterations of brain structure and function persisting into adulthood. These alterations, including abnormal regional brain volumes and white matter macro- and micro-structure, can critically impair functional (e.g. frontoparietal and thalamocortical) network connectivity in a manner characteristic of psychotic illness. The resulting executive, social, and motor dysfunctions may constitute the basis for behavioural vulnerability ultimately giving rise to psychotic symptomatology. There are many pathways to psychosis, but elucidating more precisely the mechanisms whereby preterm birth increases risk may shed light on the route consequent upon early neurodevelopmental insult.

Introduction

Prenatal and perinatal complications have been implicated in the aetiology of psychosis since the 1930s (Rosanoff et al., 1934) but assumed greater importance with the genesis of the neurodevelopmental hypothesis of schizophrenia (Murray and Lewis, 1987; Weinberger, 1987). While the hypothesis has evolved over the years to include the complex interactions of a diverse set of genetic, environmental, and social risk factors occurring throughout development, a recent meta-analysis made it clear that associations between perinatal events and elevated psychosis risk remain important (Davies et al., 2020). Being born preterm (i.e., before 37 completed weeks of gestation), very preterm (<33 weeks), or extremely preterm (<28 weeks) is associated with a heightened risk of adverse neurological and psychological outcomes from infancy to adulthood. This risk can be at least partly attributed to the sudden and premature exposure of the rapidly developing brain to the extrauterine environment during a critical period for synaptic formation, dendritic differentiation, layering of cortical neurons, and glial proliferation (Kostović and Jovanov-Milošević, 2006; Kostović and Judaš, 2010). Preterm birth has been associated with severe brain injury (Volpe, 2009), but also with more subtle brain alterations involving white matter microstructure (Kelly et al., 2016), global structural topology, functional connectivity, and cortical morphology (Ball et al., 2020). Mirroring this, outcomes of preterm birth range from subclinical psychological characteristics, such as mild inattention, to lifelong neurological disorders, such as cerebral palsy (Fawke, 2007; Johnson and Marlow, 2011). Preterm birth is also a significant risk factor for psychiatric disorders (Johnson and Wolke, 2013; Nosarti et al., 2012; Walshe et al., 2008), though most research in this regard has focused on developmental childhood disorders such as attention-deficit hyperactivity disorder (ADHD) and autism-spectrum disorders (ASD) (Johnson and Marlow, 2011). Prospective research on adult psychiatric outcomes of preterm birth is more recent (Robinson et al., 2020; Taylor, 2017), as the generation of preterm infants who survived thanks to advances in neonatal intensive care practices (e.g.
antenatal corticosteroids, surfactant therapy, and high-frequency ventilation (Manley et al., 2015)) has now reached adulthood. Hence, the need to better understand the long-term sequelae of prematurity is ever increasing. Epidemiological studies have demonstrated an association between preterm birth and increased psychosis risk (Mathiasen et al., 2011; Nosarti et al., 2012). However, the mechanism and specificity of this link are less clear. Psychosis typically first occurs in early adulthood, but is often preceded by prodromal signs of functional decline and attenuated psychotic symptoms (Poletti and Raballo, 2020), and is characterised by neural alterations that can be traced back to earlier stages of development (Walker and Bollini, 2002), such as accelerated fronto-temporal grey matter loss and reduced connectivity of large-scale brain networks (Nath et al., 2020). There is a partial but notable overlap between the neural, cognitive, and behavioural profiles of adults with psychosis and those born preterm. In this review, we critically evaluate findings relating to adult outcomes of preterm birth which may shed light on the pathways to psychosis in this population. To this end, we draw in particular (but not exclusively) on findings from an unprecedented longitudinal study of individuals born very preterm and admitted to the neonatal unit of University College London Hospital (UCLH) between 1979 and 1985; this was the first cohort in the world to undergo intensive neonatal brain ultrasonography. The infants were then followed up at several time points into adulthood using neurocognitive and behavioural assessments as well as magnetic resonance imaging (MRI). Studies reporting on outcomes of the UCLH cohort in adulthood are listed in Table 1.

General overview: preterm birth and psychopathology

There has been substantial research on the psychological consequences of preterm birth in childhood and adolescence. Accumulating evidence in preterm-born children has converged on the identification of a so-called "preterm behavioural phenotype", characterised by increases in anxiety, inattention, and social impairments, coupled with executive function deficits (Johnson and Marlow, 2011). This profile appears to extend into adolescence, manifesting in an increased prevalence of diagnosed ADHD, ASD, and affective disorders (Johnson and Wolke, 2013). Preterm adolescents are at a 3-to-4-fold increased risk of being diagnosed with any psychiatric disorder compared to their term-born peers, representing a prevalence of approximately 25% (Burnett et al., 2011; Johnson and Wolke, 2013). Sub-clinically, dimensional measures capture increased liability for experiencing a wide range of symptoms, including social (Healy et al., 2013) and emotional difficulties (Indredavik et al., 2005), in adolescence, which may impact on daily function even in the absence of a formal diagnosis. In adulthood, population linkage and meta-analytic studies have found that the risk of psychiatric hospitalisation is significantly related to the degree of prematurity (Lindström et al., 2009; Nosarti et al., 2012), and preterm-born young adults are substantially more likely to be diagnosed with a psychiatric disorder (Burnett et al., 2011; Mathiasen et al., 2011; Nosarti et al., 2012) or be prescribed psychotropic medication (Robinson et al., 2020) than their term-born counterparts.
Comparatively few studies have assessed psychopathology dimensionally in preterm adults; however, a meta-analysis of 6 studies investigating self-reported mental health problems in adults born preterm at very low birth weight found significant increases in internalising problems and socially avoidant behaviour compared to term-born adults (Pyhälä et al., 2017). In a recent study assessing psychiatric symptoms dimensionally in the UCLH cohort, we found preterm-born adults aged 30 to have elevated total psychopathology as well as increased positive, negative, and cognitive symptoms compared to term-born controls (Kroll et al., 2018). Despite the many studies retrospectively examining a history of pre- and perinatal events in schizophrenia, there is very little research linking preterm birth specifically to psychotic disorders and symptoms. While preterm birth is indeed associated with an increased risk of being diagnosed with non-affective psychosis in adult life (Nosarti et al., 2012), this risk does not appear to be specific to psychotic disorder at the population level. Thus, it is possible that preterm birth confers a transdiagnostic biological vulnerability to psychopathology in adulthood, preceded by a more specific behavioural profile in childhood and adolescence. Different aspects of this profile, in conjunction with intervening environmental factors, may precipitate different trajectories in the expression of psychopathology during the transition to adulthood, leading to increased diagnostic heterogeneity and comorbidities amongst preterm-born adults. Potential risk factors for, and precursors of, psychosis arising in adulthood as a result of preterm birth will now be discussed.

Neurodevelopment

Preterm birth is a leading cause of brain injury, mostly due to perinatal hypoxia-ischemia, which can result in damage in particular to the periventricular white matter and basal ganglia (Back, 2015; Logitharajah et al., 2009; Volpe, 2009). However, even in the absence of severe or obvious brain injury, preterm birth can result in subtle changes in neural architecture that are evident throughout development and into adulthood (Nagy et al., 2009; Nosarti et al., 2002; Nosarti et al., 2014), and which have been related to a range of neurocognitive difficulties (Hadaya and Nosarti, 2020; Kanel et al., 2020). There are several overlapping brain alterations associated with preterm birth and with psychotic disorder. One of the most commonly observed abnormalities following preterm birth is ventricular enlargement (often following peri/intraventricular haemorrhage, P/IVH), with a large proportion of preterm infants showing increased ventricle size at term-equivalent age (i.e., the age at which they would have been born had they not been premature) (Hart et al., 2008). Ventricular enlargement has also been observed in preterm adolescents and adults (Allin et al., 2011; Cooke and Abernethy, 1999; Hedderich et al., 2020; Nosarti et al., 2002; Stewart et al., 1999). Neurodevelopment of regions adjacent to the lateral ventricles is preferentially disrupted following P/IVH; consequently, post-haemorrhagic ventricular enlargement is associated with reduced deep grey matter volumes (Brouwer et al., 2016) and periventricular white matter damage (Larroque et al., 2003) at term-equivalent age. Evidence from the UCLH study suggests that these alterations are likely to have long-lasting effects on brain structure and function.
Thus, very preterm adults exhibit an association between ventricle size and impaired microstructural integrity of widespread white matter tracts (Allin et al., 2011), and ventricular enlargement resulting from perinatal brain injury is related to abnormal frontal (Kalpakidou et al., 2014) and frontoparietal neural activation during working memory processing. Increased ventricle size is also one of the most replicated neuroimaging findings in chronic schizophrenia (Olabi et al., 2011; van Erp et al., 2016), with additional evidence for ventricular enlargement at the stage of the prodrome (Chung et al., 2017) and first episode of psychosis (Steen et al., 2006; Vita et al., 2006), and even in male neonates at high genetic risk for schizophrenia (Gilmore et al., 2010). It appears to worsen over the course of the illness (Kempton et al., 2010), but at least some of this is related to the effects of antipsychotics. Numerous studies have found associations between ventricular abnormalities and a history of obstetric complications in patients with schizophrenia (Costas-Carrera et al., 2020), with a strong likelihood that complications during delivery interact with genetic risk for psychosis in contributing to these abnormalities (Cannon et al., 1989; Falkai et al., 2003). Ventricular enlargement following preterm birth may therefore represent a marker of increased underlying vulnerability to psychosis.

Similarly, white matter abnormalities are common in both preterm and psychosis populations. The most frequent form of brain injury following preterm birth is periventricular leukomalacia (PVL), defined as either focal (necrotic) or diffuse damage to the periventricular white matter. Diffuse PVL is thought to impact on premyelinating oligodendrocytes, which can subsequently result in an impairment of axonal myelination (Iida et al., 1995; Volpe, 2009). Myelination of the brain's connective white matter tracts is known to extend well into adulthood (Miller et al., 2012) and underpins the efficiency of neuronal signal conduction, providing a crucial basis for brain connectivity. Healthy cognitive functioning is dependent on effective communication between brain regions.

(Table 1 abbreviations: VPT = very preterm birth; FT = full term; WASI = Wechsler Abbreviated Scale of Intelligence; fMRI = functional magnetic resonance imaging; MD = mean diffusivity; FA = fractional anisotropy; BOLD = blood oxygen level dependent; PVH = periventricular haemorrhage; PBI = perinatal brain injury.)

In very preterm adults enrolled in the UCLH study, we have observed abnormalities both in the microstructure (Allin et al., 2011; Froudist-Walsh et al., 2015; Tseng et al., 2019; Tseng et al., 2017) and volumes (Nosarti et al., 2014; Tseng et al., 2017) of white matter structures compared to term-born controls. Overlapping observations have also been made in other adult cohorts, showing microstructural changes particularly in the corpus callosum and cingulum (Eikenes et al., 2011; Meng et al., 2016; Pascoe et al., 2019) but also extending to other core white matter tracts, which is further reflected in reduced global white matter volume in preterm individuals (Soria-Pastor et al., 2008). Several studies in preterm-born adults have shown impaired cognitive performance to be associated with reduced fractional anisotropy (FA) (Allin et al., 2011; Eikenes et al., 2011) or volume of specific white matter tracts, mirroring findings in younger cohorts (Skranes et al., 2007; Vollmer et al., 2017).
Interestingly, however, several studies have shown that impaired structural white matter integrity may result in neural compensation at the functional level, allowing for comparable task performance to term-born controls (Salvan et al., 2017). This mechanism speaks in favour of adaptive plastic processes which are achieved in the preterm brain in order to compensate for the early insult. Using network analysis, we have found that despite a relative paucity of white matter resources, preterm adults display a stronger "rich-club" architecture compared to term-born controls, suggesting that in the face of anatomical constraints, global connectivity is prioritised over other, more peripheral, connections (Karolis et al., 2016). A simulated "lesion" approach further suggested that the basal ganglia in particular played an altered role in supporting global connectivity compared to term-born adults (Karolis et al., 2016). In addition, preterm adults from the same cohort showed compromised functional connectivity between striatal aspects of the salience network and the default mode network (White et al., 2014), suggesting that between-network connectivity is critically impaired in these individuals. Further evidence for an altered role played by the basal ganglia comes from a study showing that structural volumetric abnormalities in the striatum are associated with abnormal functional connectivity in a basal ganglia network in preterm-born adults (Bäuml et al., 2015).

These observations raise interesting possibilities with respect to psychosis risk. Schizophrenia has frequently been conceptualised as a "disconnection syndrome" (Friston and Frith, 1995), whereby aberrant structural brain connectivity leads to a disintegration of effective mental functioning. It therefore seems plausible that an impairment in white matter integrity rooted in neurodevelopment may constitute a significant risk factor for psychosis (Bullmore et al., 1997; Kochunov and Hong, 2014). In fact, altered structural connectivity has been detected in infant offspring of women with schizophrenia, suggesting that genetic high risk impacts brain connectivity early on in development (Ahn et al., 2019; Shi et al., 2012). Fronto-striatal connectivity in particular has been implicated in psychosis (Dandash et al., 2017; Robbins, 1990), in line with the notion that psychotic symptoms arise as a result of aberrant integration of top-down cortical and bottom-up subcortical signals, precipitated by aberrant striatal dopamine functioning (Howes and Kapur, 2009). If, in the preterm brain, overall connectivity indeed relies more heavily on basal ganglia connections than normally expected (Bäuml et al., 2015; Karolis et al., 2016), one might hypothesise that any disturbance of the basal ganglia occurring in early adulthood could disproportionately affect global connectivity, potentially mimicking the striatal dysregulation typically observed in psychosis (Bullmore et al., 1997).

Abnormal striatal dopamine functioning appears to be a key mechanism underlying positive psychotic symptoms (Howes and Murray, 2014). Preterm adults without brain injury were shown to have normal striatal dopamine synthesis capacity, whereas preterm adults with perinatal brain injury showed reduced dopamine synthesis capacity.
This latter finding contrasts with the increased dopamine synthesis capacity typically observed in psychosis patients (Howes and Murray, 2014); however, another risk factor associated with schizophrenia, notably heavy cannabis use, has also been associated with decreased striatal dopamine (Bloomfield et al., 2014). One possibility is that this may be associated with supersensitivity of the dopamine D2 receptor, and that disruption of normal dopamine signalling either presynaptically or postsynaptically may lead to psychosis (Seeman and Seeman, 2014).

It has been suggested that glial cell abnormalities are critically involved in white matter pathology and subsequent symptomatology observed in psychosis and schizophrenia (Dietz et al., 2020). There is evidence for impaired glial progenitor cell differentiation in schizophrenia, resulting in delayed maturation of oligodendrocytes and therefore disrupted myelination in early development (Kerns et al., 2010; Windrem et al., 2017). Impaired progenitor cell differentiation may be precipitated by untimely microglial activation during late fetal development, e.g. as a result of maternal infection, and indeed increased risk of schizophrenia is associated with maternal infection in mid- to late pregnancy (Brown et al., 2004), when oligodendrocyte maturation is most sensitive to microglial activation (Chew et al., 2013). In terms of timing, preterm birth is of particular relevance here. The maternal-fetal inflammatory response and concomitant glial activation are indeed strongly implicated in prematurity (Mallard et al., 2019; Supramaniam et al., 2013) and are known to be associated with subsequent neurodevelopmental disorders such as ASD (Bokobza et al., 2019). Recent multimodal research has leveraged advances in imaging genomics to shed further light on glial cell involvement in altered brain development following preterm birth. Work here has demonstrated that the neurodevelopmental effects of prematurity are temporally and spatially coincident with developmental processes involving cortical glial cell populations (Ball et al., 2020), and that the microglial inflammatory response is implicated in structural white matter changes resulting from brain injury following preterm birth (Krishnan et al., 2017a). Thus, it is possible that neuroinflammation related to preterm birth and subsequent glial pathology could contribute to brain alterations characteristic of later psychotic illness.

On a macrostructural level, there is also evidence for volumetric grey matter abnormalities in preterm adults, with potential relevance for psychosis risk. Alterations of the grey matter are being increasingly recognised as important contributing factors to neurodevelopmental disorder and other psychopathologies following preterm birth (Fleiss et al., 2020). Young adults of the UCLH study showed reduced grey matter volume (GMV) in widespread regions including frontal, temporal, insular, subcortical, and occipital areas. Less extensive increases in GMV were found in medial and anterior frontal regions. Similar findings were also observed when the preterm individuals were adolescents, suggesting that volumetric differences are not a mere result of developmental delay, with individuals "catching up" by adulthood, but rather constitute permanent structural alterations.
Studies in other cohorts confirm this, with reduced regional and global grey matter volumes being observed in both adult (Bäuml et al., 2015; Meng et al., 2016; Pascoe et al., 2019; Shang et al., 2019) and younger (de Kieviet et al., 2012) preterm samples. Perinatal brain injury was furthermore shown to exacerbate the observed structural changes. Abnormal GMV also partially accounted for altered brain activation during a verbal executive function task, once more suggesting processes of functional plasticity compensating for structural deficits.

Meta-analyses of structural brain changes in individuals at high clinical risk of psychosis as well as those experiencing a first episode of psychosis (FEP) provide evidence for reduced GMV in frontal, temporal and insular cortices (Fusar-Poli et al., 2012b; Radua et al., 2012). Subcortical and insular GMV reductions are also seen in those at high familial risk of psychosis (Cooper et al., 2014). The recurrence of abnormalities of the insula may be of particular interest here, as this region is a major hub for the localisation of the transient bursting of spontaneous neuronal events that are critical for brain maturation in preterm infants (aged between 32 and 36 postmenstrual weeks), and as development of insular connections is preferentially affected following preterm birth, with alterations modulated by genetic factors (Krishnan et al., 2017b). Furthermore, impaired integrity of the anterior insula appears to represent a transdiagnostically shared neural substrate of mental illness across psychiatric disorders (Goodkind et al., 2015; McTeague et al., 2017). Thus, evidence again points towards an interaction of genetic liability for psychiatric disorder and perinatal factors including preterm birth in contributing to neurodevelopmental alterations that further increase the risk of experiencing psychopathology such as psychosis.

Reduced volumes of the hippocampus and parahippocampal gyrus have also been associated with clinical (Fusar-Poli et al., 2012b) and genetic (Boos et al., 2007; Fusar-Poli et al., 2014) high risk for psychosis. Patients with schizophrenia who have suffered obstetric complications including prematurity were found to be especially likely to show decreased volume of the hippocampus (Stefanis et al., 1999). Reduced hippocampal volume is apparent in preterm-born infants by term-equivalent age (Ball et al., 2013), though findings at later ages are more mixed (Aanes et al., 2015; Fraello et al., 2011; Nosarti and Froudist-Walsh, 2016; Omizzolo et al., 2013). Of note, the hippocampus is known to follow a dynamic and highly heterogeneous maturational trajectory, with both gain and loss of regional volume observed throughout development (Gogtay et al., 2006). In adulthood, very preterm born UCLH study participants showed hippocampal shape changes consistent with atrophy, despite no overall volume difference compared to term-born adults (Cole et al., 2015). Intriguingly, this longitudinal study found that larger hippocampal volume in adolescence was associated with increased delusional ideation in early adulthood. Though initially counterintuitive, this finding is consistent with observations that individuals with an At Risk Mental State (ARMS) for psychosis show increased hippocampal volume before transitioning to psychosis, contrasting with reduced hippocampal volume in first episode psychosis patients (Buehlmann et al., 2010).
Finally, cortical gyrification (i.e., the folding of the cortical surface), which predominantly occurs in fetal life, is delayed in preterm infants (Dubois et al., 2019; Engelhardt et al., 2015). Recent work demonstrated that altered gyrification is also evident in preterm adults, and crucially mediates the effect of prematurity on general IQ reduction (Hedderich et al., 2019). These findings were extended by work from the UCLH study showing that abnormal gyrification in preterm adults was related not only to lower IQ but also to increased psychopathology (Papini et al., 2020). Strikingly, the pattern of abnormal cortical folding overlapped considerably with that observed in adolescents with a diagnosis of schizophrenia (Palaniyappan and Liddle, 2012), with alterations involving the inferior frontal, insular, and superior temporal cortices.

Taken together, neurobiological abnormalities following preterm birth are widespread and heterogeneous, and are likely to confer a general vulnerability to psychiatric disorder. However, within this heterogeneity, it is possible that certain neurodevelopmental trajectories are indicative of clinical risk for psychosis more specifically. Moreover, recent evidence supports the notion that genetic vulnerability to psychiatric illness (as measured by polygenic risk scores) interacts with the environmental stress caused by preterm birth to promote neuroanatomical abnormalities of the lentiform nucleus (Cullen et al., 2019) (which together with the caudate forms the striatum); this in turn may further compound the risk of experiencing psychopathology. Therefore, where prematurity coincides with genetic risk for psychosis and results in brain injury, the pre-existing vulnerability is likely to be exacerbated. Ventricular enlargement resulting from perinatal brain injury may be a particular marker of vulnerability to psychosis. In addition, brain tissue abnormalities, especially where they affect fronto-insular-temporal or hippocampal cortex, as well as fronto-striatal connectivity, likely underlie an increased risk for psychosis in preterm individuals. Importantly, many of the brain alterations observed in preterm or psychosis samples also mediate a range of cognitive deficits, which will be discussed in more detail in the following section.

Cognitive function

Cognitive impairment in psychosis spans multiple domains including executive functioning (Reichenberg and Harvey, 2007), language-related abilities (Condray, 2005), and social cognition (covered in more detail in Section 3.4) (Sheffield et al., 2018). Executive dysfunction in particular is a hallmark of schizophrenia, with deficits thought to be underpinned by neural abnormalities notably of the prefrontal and anterior cingulate cortices (Minzenberg et al., 2009). Importantly, cognitive deficits are already evident in childhood in individuals who go on to develop schizophrenic psychosis (Fusar-Poli et al., 2012a; Jones et al., 1994). It has consequently been suggested that cognitive impairment lies, at least in part, on the causal pathway linking genetic or developmental risk factors with the development of psychosis (Reichenberg, 2005), and is therefore an early indicator of psychosis risk. Cognitive development in children and adolescents born preterm has also been studied extensively (Johnson, 2007).
A recent meta-analysis of cognitive outcomes concluded that very preterm born children exhibit medium to large deficits in general intelligence, executive functioning, and processing speed (Brydges et al., 2018). Many of these deficits are associated with early regionally specific neuroanatomical changes (Batalle et al., 2018). For example, thalamocortical (Ball et al., 2015) and callosal (Pannek et al., 2020) connectivity assessed at term-equivalent age are predictive of cognitive abilities in toddlers, whereas neonatal microstructural integrity of the arcuate fasciculus predicted language ability at the age of 2 (Salvan et al., 2017). Neonatal volumes of the insula and putamen are associated with maths skills at ages 5 and 7 (Ullman et al., 2015), and 8-year-olds show smaller cortical volumes that are associated with general IQ (Peterson et al., 2000).

Several studies have investigated which cognitive deficits persist into adulthood (Breeman et al., 2015; De Jong et al., 2012; Løhaugen et al., 2010); the UCLH study in particular has shed light on the neural correlates of cognition in preterm adults (Allin et al., 2011; Kalpakidou et al., 2014; Kontis et al., 2009; Lawrence et al., 2010; Lawrence et al., 2009; Narberhaus et al., 2009; Nosarti et al., 2007; Nosarti et al., 2006; Nosarti et al., 2009). Preterm born adults tend to score lower on average than term-born controls on IQ tests (Breeman et al., 2015; Kroll et al., 2019; Løhaugen et al., 2010). However, even accounting for these differences in IQ, preterm adults exhibit further cognitive deficits, particularly in executive functioning. In the first broad assessment of executive function in very preterm young adults, our group reported impairments in tasks involving response inhibition and mental flexibility (Nosarti et al., 2007). Impaired executive functioning was furthermore associated with poorer real-life achievements in these adults.

When investigating the neural correlates of executive functioning in the UCLH cohort, we also found that in spite of good performance on a response inhibition task, preterm adolescents and adults compared to controls displayed altered task-related haemodynamic responses, which may indicate alternative processing strategies in these individuals (Nosarti et al., 2006). This is underlined by findings from an independent cohort that preterm adults show a haemodynamic response characteristic of predominantly reactive rather than proactive cognitive control, suggesting alternative neural strategies despite similar task performance. Similar observations were made with respect to verbal fluency (Kalpakidou et al., 2014; Nosarti et al., 2009) and verbal and visual associative learning (Lawrence et al., 2010; Narberhaus et al., 2009) tasks, during which preterm adults showed altered brain activation in the absence of performance differences. These findings in adults are in line with a considerable body of work in preterm children demonstrating engagement of alternative neural circuits, especially in the context of language processing (Barde et al., 2012; Barnes-Davis et al., 2018; Lubsen et al., 2011; Ment et al., 2006). Overall, while cognitive performance in preterm-born adults compared to term-born controls depends on the specific task domain (and difficulty) under study, performance does appear to be associated with neuroanatomical alterations associated with preterm birth.
For example, deficits in global executive functioning in preterm adults were found to be mediated by reduced temporal grey matter and callosal white matter volumes. Microstructural changes of the corpus callosum in preterm adults are also associated with lower IQ, verbal learning, and memory performance (Allin et al., 2011; Kontis et al., 2009). Taken together, these findings suggest that neurodevelopmental abnormalities associated with preterm birth necessitate a functional reorganisation of executive processes, but can nevertheless result in persisting cognitive deficits. Some of these deficits and their associated neural changes overlap with those seen in psychosis: for example, impaired inhibitory control is associated with similar patterns of increased midline and attenuated fronto-parieto-cerebellar activation in psychosis (Minzenberg et al., 2009; Vercammen et al., 2012), and abnormal microstructural integrity of the corpus callosum is related to poor executive function in patients with schizophrenia (Ohoshi et al., 2019).

It is possible that executive function deficits following preterm birth are mediated by attentional difficulties, with inattention constituting the core behavioural difficulty in preterm individuals (Elgen et al., 2002). This notion provides a compelling basis for a potential link between the cognitive profile of prematurely born individuals and elevated psychosis risk. Attentional impairments are not only pervasive in patients with psychotic disorder (Hoonakker et al., 2017; Luck and Gold, 2008), but may also constitute a neurodevelopmental marker of vulnerability to psychosis prior to illness onset (Seidman et al., 2016). Children with ADHD are at higher risk of developing schizophrenia in adulthood (Dalsgaard et al., 2014), and the severity of childhood ADHD symptoms retrospectively assessed in first-episode schizophrenia patients is associated with obstetric complications, delay of milestone attainment, and earlier onset of psychotic symptoms. Intriguingly, much of the altered neural activation observed in preterm adults during executive task performance occurs within a fronto-parietal network thought to underpin attention allocation (Nosarti et al., 2006). Corresponding with this, patients with psychosis exhibit altered activation and connectivity of the attentional fronto-parietal network not just during executive task processing (Godwin et al., 2017; Roiser et al., 2013), but even at rest (Chang et al., 2014; Tu et al., 2013).

Attentional impairment is suggested to serve as a particularly useful endophenotypic marker of familial risk for psychosis (Cornblatt and Malhotra, 2001), predicting over half of individuals at high risk for psychosis who develop schizophrenia (Erlenmeyer-Kimling et al., 2000). However, even in the absence of genetic risk for psychosis, it is possible that neurodevelopmental disruptions to networks underpinning attentional processes could have wide-reaching cognitive consequences increasing vulnerability to psychosis. More generally, it has been suggested that early neurodevelopmental insults might deplete cognitive reserves, resulting in a cascade of cognitive impairments that manifest increasingly throughout development as environmental demands grow, and culminate in psychosis (Mollon and Reichenberg, 2018). In summary, there is overlap both in the cognitive domains and in the associated neural correlates implicated in psychosis and those affected by prematurity, notably executive functioning and attention.
While not all studies show differences in cognitive performance between preterm and term-born adults, there is substantial evidence that cognitive abilities are associated with neural abnormalities in those born preterm, both during development (Ball et al., 2015; Pannek et al., 2020; Peterson et al., 2000; Salvan et al., 2017; Ullman et al., 2015) and in adulthood (Allin et al., 2011; Lawrence et al., 2014; Nam et al., 2015; Nosarti et al., 2014; Olsen et al., 2018). Given these findings, which suggest that alterations in neural systems underpinning cognitive functioning persist well into adulthood in preterm individuals, it is likely that these disruptions constitute significant vulnerability factors that could further interact with genetic or environmental risk for psychotic disorder.

Motor functioning

Even in the absence of obvious disability, neurological impairments are known to be more common in preterm-born individuals than in their term-born peers (Fawke, 2007). Delays in motor development are a key characteristic of preterm infants, with evidence that motor impairments can persist into adulthood (Allin et al., 2006b; de Kieviet et al., 2009). Young adults of the UCLH study underwent comprehensive neurological assessment aged 18 (Allin et al., 2006b), with preterm-born individuals exhibiting increased motor confusion, alongside impaired sensory integration, compared to term-born controls. These integrative neurological abnormalities were furthermore associated with reduced general intelligence. In a further study, adults from the same cohort performed a movement generation task while undergoing functional imaging (Lawrence et al., 2014). Although many preterm and term-born participants performed near ceiling at this simple task and behavioural differences between groups were not detected, preterm individuals showed increased activation in a cerebellar-cortical network relative to controls. Similarly to several cognitive studies in this cohort, these findings imply recruitment of additional neural resources in order to perform the motor task at comparable levels to term-born subjects. This hyperactivation was correlated with structural grey matter deficits in right premotor cortex (Lawrence et al., 2014), lending further support to the notion of a neural compensatory strategy.

Research in other preterm cohorts also confirms that motor impairments following prematurity are not outgrown by adulthood. Extremely low birth weight survivors showed impaired motor coordination compared to controls from the age of 8 to 36, suggesting that the impairment is stable throughout development (Poole et al., 2015). Both fine and gross motor skills remained impaired from age 14 to 23 in preterm individuals born at very low birth weight (VLBW) (Husby et al., 2013), and these impairments were associated with microstructural abnormalities of interhemispheric (corpus callosum) and motor (corticospinal tract) pathways. Motor coordination problems in adults born preterm have also been shown to be associated with reduced cortical surface area (Sripada et al., 2015). Crucially, poorer motor skills are associated with increased levels of psychiatric symptoms and lower quality of life ratings in preterm VLBW adults (Husby et al., 2016), leading to suggestions that manifestations of motor impairments may become more evident as the challenges related to the transition into adulthood increase.
Psychosis is known to be associated with an excess of neurological soft signs including impairments in motor coordination, sequencing, and sensory integration (Dazzan and Murray, 2002). Given the preponderance of motor deficits in first-degree relatives of individuals with schizophrenia, developmental motor symptoms, especially impaired coordination, are thought to represent an endophenotype of schizophrenia indicating disease risk (Burton et al., 2016). Motor deficits in schizophrenia are typically associated with negative and cognitive symptoms (Bombin et al., 2005), but in individuals at high risk for psychosis, motor dysfunction was also highly related to premorbid positive symptomatology and was furthermore indicative of transition to psychosis (Masucci et al., 2018). Indeed, gross motor skills have been shown to have unusually high sensitivity (75%) in predicting schizophrenia-related psychoses in offspring of patients with schizophrenia (Erlenmeyer-Kimling et al., 2000).

Motor neurological soft signs in psychosis are associated with structural alterations in a cerebello-thalamo-prefrontal network (Mouchet-Mages et al., 2011), which is also implicated in "cognitive dysmetria" (i.e., disruption in the fluid coordination of mental activity) in schizophrenia (Andreasen et al., 1999; Andreasen et al., 1996). Cerebellar and thalamocortical connectivity are known to be affected by prematurity (Ball et al., 2013; Herzmann et al., 2019), likely playing a role in the emergence of motor dysfunction in preterm infants (Hoon Jr et al., 2009; Messerschmidt et al., 2008). Furthermore, structural alterations of the basal ganglia, which are at particular risk of neonatal brain injury following preterm birth (Logitharajah et al., 2009), are associated with motor abnormalities in first episode psychosis patients (Cuesta et al., 2020). These findings suggest that preterm birth may cause disruption to the neural circuitry crucially involved in early neurological signs associated with psychosis. Early motor difficulties could lead to impaired self-other distinction and development of the embodied self, which are implicated in the genesis of psychotic symptoms by virtue of a failure to distinguish between internally and externally generated sensation (Poletti et al., 2019). Recent evidence of impaired body representation and deficits in sensorimotor representation of self- and other-generated action in preterm children (Butti et al., 2020; Montirosso et al., 2019) provides a possible mechanistic link between abnormal motor development following preterm birth and vulnerability to psychosis. This is further underlined by evidence that neuromotor abnormalities are significantly associated with obstetric complications in high risk (Marcus et al., 1993) and psychotic individuals.

Social functioning

Social difficulties and deficits in social cognition (referring to the mental operations underlying social behaviour) are a key characteristic of the preterm behavioural phenotype (Johnson and Marlow, 2011). Preterm-born children and adolescents are at increased risk of developing autism-spectrum disorders (ASD), whereby the symptomatic presentation is more strongly characterised by social communication problems than by repetitive or stereotyped behaviour (Indredavik et al., 2005).
Both alterations in brain development (Fischi-Gómez et al., 2015; Healy et al., 2013) and exposure to socio-environmental risk factors (Montagna and Nosarti, 2016) are implicated in adverse childhood social outcomes following preterm birth, and general neurocognitive impairments in functions such as attention and memory are likely to contribute to social cognitive deficits in these individuals (Dean et al., 2021). Socio-emotional problems following preterm birth can already be observed in early childhood, and these problems have been associated with structural and functional brain alterations. For example, disrupted orbitofrontal white matter integrity at term-equivalent age is predictive of socio-emotional difficulties at age 5 (Rogers et al., 2012), and in school-aged children born preterm poorer performance on social tasks is associated with hypoactivation of relevant frontoparietal circuits (Mossad et al., 2017; Urbain et al., 2019).

In contrast to childhood and adolescence, comparatively little research has focused on social cognition in adults born preterm, but evidence suggests that social difficulties persist into adulthood, with those born preterm reporting poorer social life and fewer social interactions (Kajantie et al., 2008; Lund et al., 2012; Saigal, 2014). Personality assessments in preterm adults also showed that they are characterised by a distinct socially withdrawn personality type, lower extraversion and higher neuroticism (Allin et al., 2006a).

Psychotic disorders, too, are characterised by profound problems with social interactions and social cognitive deficits. These difficulties are likely rooted in early development, with children who later develop schizophrenia showing greater social maladjustment than healthy controls (Done et al., 1994). In fact, the magnitude of social cognitive deficits in individuals at high clinical risk for psychosis substantially exceeds deficits in other cognitive domains (Fusar-Poli et al., 2012a). Similarly to other cognitive deficits, attention is suggested to play an important role in the development of social difficulties in psychosis (Cornblatt and Malhotra, 2001), whereby attentional problems may cause an inability to efficiently process information from the (social) environment, resulting in a disruption of social competence. This is underscored by observations that attention deficits tend to be more highly correlated with social than with psychotic symptoms (Green, 1996). Good social functioning likely acts as a protective factor against psychopathology (McLaughlin et al., 2020; Selten and Cantor-Graae, 2005); thus social difficulties arising as a result of preterm birth may transdiagnostically increase the risk for developing psychiatric disorder in later life. With particular relevance for psychosis, social stress, disadvantage, or isolation (encapsulated in the concept of "social defeat") has furthermore been suggested to cause sensitisation and/or overactivity of the mesolimbic dopamine system, thus precipitating the emergence of psychotic symptoms more specifically (Gevonden et al., 2014; Selten and Cantor-Graae, 2005). Conversely, the dysfunction of striatal dopamine which we found in adults with perinatal brain injury from the UCLH cohort may underpin vulnerability to psychosocial stress (Pyhälä et al., 2009).
In summary, psychosocial stress associated with poor interpersonal skills following preterm birth is highly likely to increase general vulnerability to mental health problems, although the specificity of this for psychosis is less clear. However, social impairments following preterm birth may also be a marker of disrupted development of neural systems underpinning not only social but also general cognitive functioning (such as attentional processes) (Montagna and Nosarti, 2016). Key regions of the social brain are also relevant for cognitive processes implicated in psychosis development, such as the temporoparietal junction (Gromann et al., 2013) and medial prefrontal cortex (Ilzarbe et al., 2019). Thus, abnormalities in these regions associated with prematurity may manifest socially during earlier development, but contribute to elevated psychosis risk in the transition to adulthood.

Summary and outlook

We have considered the implications of the neurodevelopmental, cognitive, motor, and social sequelae of preterm birth for developing vulnerability to psychosis, with an emphasis on outcomes observed in adulthood. There is remarkably little research explicitly investigating the occurrence of psychotic symptoms following preterm birth. This may be a consequence of the difficulties caused by the long interval between preterm birth and onset of psychosis, but it is also a reflection of a lack of specificity in the link between the two. Indeed, research indicates that preterm birth results in a transdiagnostically increased risk for psychiatric disorder in adulthood at the population level (Nosarti et al., 2012; Walshe et al., 2008). However, at the individual level, it is nevertheless important to consider the mechanisms by which adverse outcomes of preterm birth may increase vulnerability specifically to psychosis. Outcomes of preterm birth are highly heterogeneous, placing those exposed onto one of a wide array of possible trajectories. Similarly, psychotic disorders are highly heterogeneous both in presentation and in origin. The aim of this review was to tease out the pathways that could link prematurity to psychosis risk within this diversity. Key potential mechanisms and markers identified here are depicted in Fig. 1.

Certain aspects of the developmental preterm behavioural phenotype are particularly relevant for potential psychosis risk. Specifically, deficits in attention, social difficulties, and motor impairments, all of which are increased following preterm birth, are known to be predictive of psychosis. Each of these factors has separately been identified as an endophenotype signalling familial risk for psychosis (Burton et al., 2016; Cornblatt and Malhotra, 2001; Tikka et al., 2020). As such, they could be considered simple epiphenomena of underlying genetic vulnerability; however, it is more likely that they lie on the causal pathway towards developing psychotic symptoms and therefore pose risk even in people with relatively low genetic vulnerability. Attentional deficits likely induce greater social and cognitive impairment engendering abnormal information processing; social difficulties increase the likelihood of experiencing psychosocial stress and isolation resulting in social defeat; and motor difficulties could lead to impaired self-other distinction and development of the embodied self; all of these represent outcomes known to be strongly associated with psychotic experiences.
Each of these factors warrants more detailed investigation in adult preterm cohorts in conjunction with assessments of psychosis-proneness. The behavioural vulnerability factors described here are underpinned by neural abnormalities that show overlapping structure in preterm and psychosis patient samples. Neural changes observed in psychosis have long been considered to be at least partly neurodevelopmental in nature, and many of these alterations could be caused by insults to the brain and/or subsequent altered neurodevelopment as a result of preterm birth, including those resulting from the neuroinflammatory glial response. Of course, individuals subject to preterm birth suffer very different impacts on the neonatal brain. Those candidate impacts most likely to increase risk of psychosis include white matter injury resulting in impaired microstructural integrity of fronto-striatal and interhemispheric tracts (Allin et al., 2011; Dandash et al., 2017; Karolis et al., 2016; Ohoshi et al., 2019), reduced grey matter volume in basal ganglia, fronto-insular, and temporal regions (Cuesta et al., 2020; Fusar-Poli et al., 2012b; Logitharajah et al., 2009; Nosarti et al., 2014), and functional dysconnectivity in frontoparietal and cerebello-cortical circuits (Ball et al., 2013; Mouchet-Mages et al., 2011; Narberhaus et al., 2009; Roiser et al., 2013). Brain alterations resulting from preterm birth may manifest in symptoms such as inattention or autism early on, but then change in the transition to adulthood as the demands of the surrounding world increase, ultimately leading to psychosis. Equally, individuals could appear behaviourally unaffected for the most part (as seen in the frequently normal cognitive task performance in preterm and term-born adults), but the neural compensatory changes necessary for this intact behaviour could signal vulnerability that renders the system less resilient to later stressors and risk factors.

Importantly, substantial evidence suggests that there is at least an additive effect between genetic risk for psychosis and obstetric complications (Cannon et al., 1989; Cullen et al., 2019; Falkai et al., 2003), whereby neonatal brain injury as a result of preterm birth likely increases an already existing risk. In addition, social factors of the caregiving environment such as parenting and the wider community may play an important role. Preterm birth is known to cause significant parental stress and overprotectiveness, which can negatively impact cognitive and behavioural development (Kleine et al., 2020; Treyvaud, 2014), thus representing a further risk factor for increased psychopathology. In addition, it is important to note that the causes of preterm birth itself are heterogeneous; for example, preterm deliveries may be medically indicated due to pre-eclampsia or fetal growth restriction, or occur spontaneously due to maternal infection, inflammation or vascular disease (Goldenberg et al., 2008), and these different aetiologies could themselves constitute independent risk factors for the subsequent development of the offspring. For example, preterm birth resulting from chronic maternal stress and mediated by fetal infectious-inflammatory processes (Wadhwa et al., 2001) likely poses a greater risk for subsequent psychosis-proneness (due to the established link between inflammation and psychosis, as discussed above) compared to indicated preterm birth due to pre-eclampsia (Davies et al., 2020).
Although neonatal morbidity (Bastek et al., 2010) and mortality (Delorme et al., 2016) are known to depend on the subtype of preterm birth, the impact of different causes of preterm birth on long-term outcomes is an area which remains largely unexplored (Crump, 2020). However, the possibility of genetic confounding of the association between prematurity and psychopathology must be considered, given that women with mental illness are at increased risk of preterm birth (Baer et al., 2016), especially spontaneous preterm birth (Sanchez et al., 2013; Venkatesh et al., 2019). That said, a recent study found little evidence for an association between maternal polygenic risk for schizophrenia and preterm delivery (Leppert et al., 2019), although psychosocial and lifestyle factors associated with psychiatric disorder could underlie the increased risk of giving birth prematurely (Behrman and Butler, 2007). Taken together, it must be acknowledged that an association between preterm birth and psychosis risk could in some cases be at least partly attributed to common underlying risk factors for both, including genetic susceptibility.

Moreover, prematurely born infants are subject to a wide range of different postnatal experiences associated with varying degrees of neonatal illness (and treatment thereof) in neonatal intensive care, such as exposure to pain, need for mechanical ventilation, surgery, or anaesthesia. While it is difficult to separate out potential effects of intensive care strategies from effects of morbidity on later outcomes (Marlow, 2014), many of the factors characterising the early hospital course are important risk factors for poor neurodevelopmental outcomes (Kocek et al., 2016; Smith et al., 2011; Xiong et al., 2012). Gaining a better understanding of the long-term consequences of neonatal intensive care remains challenging and will be important in order to elucidate potential links between prematurity and psychiatric outcomes, including psychosis.

It is important to remember that a range of factors can increase risk of psychosis and that the pathways between these different factors and psychotic symptoms may be quite different. For example, it is already clear that those individuals who develop psychosis following heavy cannabis use show relatively little in the way of neurodevelopmental or social difficulties in childhood. Future research can further improve our understanding of psychosis by taking into account this heterogeneity. Risk related to preterm birth should be studied firstly by more explicitly assessing the prevalence of psychotic symptoms dimensionally in large developmental and adult preterm cohorts. Here, a better understanding of the long-term outcomes following varying subtypes of preterm birth with different underlying causes will substantially improve further risk stratification. In addition, testing whether symptoms of psychosis are more strongly related to brain structure and function in preterm compared to term-born individuals will shed light on specific pathways to psychosis related to neurodevelopmental disruption. Finally, it will be crucial to investigate the predictive validity of developmental symptoms in conjunction with neural abnormalities for transition to psychosis specifically in preterm cohorts.
Overall, this will improve clinicians' ability to anticipate the type of psychopathology preterm-born individuals are most likely to suffer from long-term, and thereby provide more tailored care on an individual level.
Histological and Demographic Characteristics of the Distribution of Brain and Central Nervous System Tumors' Sizes: Results from SEER Registries Using Statistical Methods

The examination of brain tumor growth and its variability among cancer patients is an important aspect of epidemiologic and medical data. Several studies of brain tumors have interpreted descriptive data; in this study we perform inference to the extent possible, suggesting possible explanations for the differentiation in the survival rates apparent in the epidemiologic data. Population-based information from nine registries in the USA is classified with respect to age, gender, race and tumor histology to study tumor size variation. The Weibull and Dagum distributions are fitted to the highly skewed tumor size distributions; the parametric analysis of the tumor sizes showed significant differentiation between sexes, increased skewness for both the male and female populations, as well as decreased kurtosis for the black female population. The effect of population characteristics on the distribution of tumor sizes is estimated by a quantile regression model and then compared with the ordinary least squares results. The higher quantiles of the distribution of tumor sizes for whites are significantly higher than those of other races. Our model predicted that the effect of age on the lower quantiles of the tumor size distribution is negative, given the variables race and sex. We apply probability and regression models to explore the effects of demographic characteristics and histology types and observe significant racial and gender differences in the form of the distributions. Efforts are made to link tumor size data with available survival rates in relation to other prognostic variables.

INTRODUCTION

The importance of epidemiologic and medical data, and of inference to the extent possible, for primary brain tumors and tumors of the Central Nervous System (CNS) has been previously recognized (1,2). Relevant rates and prognostic information come primarily from clinical trials and population registry data. Clinical trials usually provide more complete information on prognostic factors, since one pathology has been reviewed as a whole. On the other hand, estimates based on population registry data reflect a bigger picture of patients but with considerably larger variance for the times and types of diagnoses. Many studies for tumors of the brain and CNS have examined and interpreted descriptive and epidemiologic data suggesting possible explanations for the changes in the disease rates. Brain tumor growth and its differentiations account for some of the variability in the survival rates, emphasizing the importance of tumor size in the prognosis of patients with brain tumor (3,4). The statistics reflected in this study represent a significant portion of the US population, and the data is maintained with high standards for different geographical and ethnic populations, supporting our central goal to report what may possibly make the brain and CNS an undesirable environment for tumor progression. For this study, we select the size of the tumor to be the variable of interest for different types of tumor and demographic characteristics. The distribution of tumor types with age is reported by the Central Brain Tumor Registry of the United States (CBTRUS). Higher incidence rates in males than in females for most histologies have been reported in (5), as well as racial differences in occurrence rates.
Trends in incidence and survival in the United States have been studied in (6), where increased risk of brain cancer was associated with being male, Caucasian, and elderly. In (7) and (8) cancer survival trends are evaluated, and changes in the survival rates suggest possible explanations for the improved prognosis. Finally, prevalence rates for the US population are studied in (9). We aim to present our findings with regard to diagnosed brain tumors in the USA from 1973 up to 2006. The National Cancer Institute's Surveillance, Epidemiology and End Results (SEER) program provided us with information for malignant brain tumors. To better understand the relationship between demographic characteristics and brain tumor prognosis, we identified the probability distributions that best describe the variability of the tumor size records for different races and sexes. Such a characterization is essential in order to obtain information about the central tendency, variance and skewness of distributions specific to race and sex, and to postulate about their effect on the tumor size. Furthermore, in an attempt to better understand the role of age on the tumor size, we study its effect on the distribution of tumor sizes in the presence of two other predictors, namely race and gender, and consider all possible interactions as potential prognostic factors for the tumors' variation.

METHODS

The discussion below presents the data from several cancer registries in the United States of America, as well as the complete compilation of the various brain tumor types included in the study. Finally, we tabulate the distribution of tumor types with respect to age groups, race and gender.

Selection and Description of Participants

The Surveillance, Epidemiology and End Results program is a comprehensive source of population-based information on cancer registries covering 28% of the population in the USA, collecting complete and accurate data on all cancers diagnosed. SEER periodically reports incidence, mortality and survival data, as well as the extent of disease at diagnosis, and links those with other national data sources to identify unusual changes and differences in the patterns. Specific goals of SEER include the facilitation of collaboration among the scientific community and the encouragement of the use of surveillance data by researchers, public health officials, policy makers and the public for cancer prevention, monitoring and control interventions. In this study we analyze data on malignant primary brain tumors diagnosed from 1973 through 2006, which are available for the following registries: Atlanta, Connecticut, Detroit, Hawaii, Iowa, New Mexico, San Francisco-Oakland, Seattle-Puget Sound, and Utah, as well as Los Angeles, San Jose-Monterey, Rural Georgia, the Alaska Native Tumor Registry, Greater California, Kentucky, Louisiana, and New Jersey. The data set includes 72,770 primary malignant brain tumor cases: 37,150 males and 35,620 females. The information for the histology of the brain tumors provided by SEER includes definitions for cancer morphology and topography and is based on the ICD-O-3 histology (10). Primary site codes are C000-C809 and ICD-O-3 histology codes are 9161-9571 as well as 8000-8005; finally, 0, 1 and 3 are ICD-O-3 behavior recodes. Tumors vary greatly in size and position; characterizing shape is inevitably subjective and becomes infeasible in large datasets. SEER records for primary tumors were measured as the largest dimension or diameter of the tumor in mm.
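As an illustration of the record selection just described, the following is a minimal, hypothetical sketch in Python. The file and column names are our own assumptions and not SEER's actual schema; only the code ranges (primary site C000-C809, ICD-O-3 histology 9161-9571 and 8000-8005, behavior recodes 0, 1, 3) are taken from the text.

```python
# Hypothetical sketch of the record selection described above; column and file
# names are illustrative assumptions, not SEER's actual schema.
import pandas as pd

seer = pd.read_csv("seer_cases.csv")  # placeholder file name

# Code ranges taken from the text: primary site C000-C809, ICD-O-3 histology
# 9161-9571 and 8000-8005, and behavior recodes 0, 1, 3.
primary_site = seer["primary_site"].between("C000", "C809")
histology = (seer["icdo3_histology"].between(9161, 9571)
             | seer["icdo3_histology"].between(8000, 8005))
behavior = seer["icdo3_behavior"].isin([0, 1, 3])

brain = seer[primary_site & histology & behavior].copy()
# Keep cases with a recorded tumor size (largest dimension/diameter, in mm).
brain = brain[brain["tumor_size_mm"].notna()]

print(brain.groupby(["sex", "race"])["tumor_size_mm"].describe())
```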
The research data we are interested in included 11,331 male (87.6% white, 5.8% black, 6.6% other races) and 11,027 female (84.7% white, 7.5% black, 7.8% other races) individuals with recorded tumor sizes.

Technical Information

The classification of tumors for adolescents and young adults (AYA) was developed to better understand major cancer sites and facilitate the reporting of cancer incidence rates and trends. The histological site groups for the tumors that are used in the SEER have been based on the classification scheme proposed in (11) for cancer morphology and topography; six main diagnostic groups are defined, half of which have subgroups of two or three members. It is mentioned in the paper cited before that "Included in the considerations at that time were the desirability of having a standard framework while allowing for the flexibility of subdivisions within a small number of main groups, and the allocation of the maximum number of codes to specific categories so that the number of malignancies grouped as "other" is minimized." The groups are further delineated for more detailed analysis of the information in terms of specific histological subgroups. Astrocytoma is subdivided into specified low grade astrocytic tumor, glioblastoma and anaplastic astrocytoma, and astrocytoma not otherwise specified (NOS). The remaining groups are other glioma, ependymoma, and medulloblastoma and other primitive neuroectodermal tumors (PNET), subdivided into medulloblastoma and supratentorial PNET. Finally, there are other specified intracranial and intraspinal neoplasms, and unspecified intracranial and intraspinal neoplasms, subdivided into unspecified malignant intracranial and intraspinal neoplasms and unspecified benign/borderline intracranial and intraspinal neoplasms; the classification scheme is presented in Figure 1.

Descriptive information for the total number of tumors by race, sex, and histology is presented in Table 2; the numbers shown in the body of the table refer to percentages of the diagnosed population out of the total numbers found in the right column. Cases are racially classified as white or nonwhite. The most frequently reported histologies are glioblastoma and anaplastic astrocytoma (32.5%), unspecified malignant intracranial and intraspinal neoplasms (16.2%) and other specified intracranial and intraspinal neoplasms (13.6%), which account for over one half of the reported tumors. A large number of astrocytomas (21.8%) is classified as not otherwise specified. Age distributions differ by histology type, suggesting that different etiologic factors are active. Medulloblastic tumors are more prevalent in children: 69.7% of medulloblastoma patients are diagnosed before the age of 20, and these tumors are more common among males (12). Finally, ependymoma is more frequent in females than in males; for a comprehensive analysis of rates and their time trends see (13).

For the parametric analysis of the tumor sizes, the probability distribution that best fits the tumor size data is the Dagum or inverse Burr distribution, widely used to describe the distribution of personal income. The Dagum distribution is closely related to the generalized beta II distribution and can be parameterized either with three or four parameters (14), as shown in equation (a), which gives its cumulative distribution function:

F(x) = \left[ 1 + \left( \frac{x - \gamma}{\beta} \right)^{-\alpha} \right]^{-k}, \quad x > \gamma. \qquad (a)

Also, the Weibull probability distribution, frequently used in reliability engineering (15), characterizes tumor sizes for females in most of the cases; its analytical representation is shown in equation (b):

F(x) = 1 - \exp\left[ -\left( \frac{x}{\beta} \right)^{\alpha} \right], \quad x \ge 0. \qquad (b)
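A minimal sketch of how distributions (a) and (b) can be fitted by maximum likelihood and screened with the Kolmogorov-Smirnov test (the selection procedure is described in the next paragraph). The tumor size data here are simulated placeholders, not the SEER records; note that scipy's burr distribution (Burr Type III) coincides with the Dagum distribution, and fixing loc at 0 yields the three-parameter form (γ = 0).

```python
# Minimal sketch (simulated placeholder data, not the SEER records) of maximum
# likelihood fitting and Kolmogorov-Smirnov screening for the two candidate models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sizes = stats.weibull_min.rvs(1.8, scale=40.0, size=20000,
                              random_state=rng)  # stand-in for tumor sizes (mm)

# Random sample of 5000 records, mirroring the selection procedure in the text.
sample = rng.choice(sizes, size=5000, replace=False)

# scipy's `burr` is Burr Type III, i.e. the Dagum distribution; floc=0 fixes gamma = 0.
dagum_params = stats.burr.fit(sample, floc=0)
weibull_params = stats.weibull_min.fit(sample, floc=0)

for name, dist, params in [("Dagum (Burr III)", "burr", dagum_params),
                           ("Weibull", "weibull_min", weibull_params)]:
    ks, p = stats.kstest(sample, dist, args=params)
    # Caveat: with parameters estimated from the same sample, the nominal
    # KS p-value is optimistic (anti-conservative).
    print(f"{name}: KS statistic = {ks:.4f}, p-value = {p:.3f}")
```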
The probability distributions are selected here using random CNS samples of 5000 from the data at hand; the Kolmogorov-Smirnov goodness-of-fit test is applied to the identified distributions. In the case of the four-parameter Dagum distribution, α and k are shape parameters, β is the scale parameter and, finally, γ is the location parameter. When γ = 0, resulting in the (0, ∞) domain for the probability distribution, we refer to the three-parameter Dagum distribution. For the Weibull distribution, α and β are the shape and scale parameters, respectively. The product α·k of the parameters of the Dagum distribution measures the rate of increase from zero for x→0, or the probability mass of the tail (14); it is larger for males than for females in the only relevant case, resulting in a greater probability mass in the left tail for females. We list the maximum likelihood estimates of the identified distributional parameters in Table 1.

RESULTS

In the following subsections we summarize brain tumor sizes by age, race, and sex for patients with different tumor types. Significantly different mean tumor sizes for histology-specific tumors may have important implications. Finally, we report the differences in the probability distributions of the tumor sizes fitted in population subgroups.

Comparisons of Mean Tumor Sizes

Age-specific average tumor sizes by sex are plotted to present the variability for various ages. For specific histologies and age groups we compute the mean tumor size and statistically compare gender-specific averages. Taking into consideration the complexity of the data, we performed all pairwise comparisons non-parametrically, not relying on any particular distribution. In the subsequent analysis we have disregarded tumors with sizes of more than 105 mm, as those tumors were deemed to challenge the robustness of our analysis. A standardized system for the analysis and presentation of data regarding tumors diagnosed in a specific age group for the given classification will greatly facilitate comparisons of interest and generate interesting hypotheses. To this end we plot the annual mean tumor sizes for males and females and comment on the behavior for different data-driven age groups in Figure 2.

Evidently, average sizes of tumors diagnosed before forty years of age exhibit greater variability than those of tumors diagnosed later, and this behavior is common to both sexes. The data is less volatile for tumor sizes diagnosed in individuals between the ages of forty and seventy, though the tumor sizes are consistently larger for males. For the ages less than forty there is a distinction in the behavior between the sexes: the male records exhibit a concave-up behavior with a peak at the age of twenty, while the female records do not present any clear pattern. Lastly, for the ages above forty, male tumor sizes do not show any clear pattern while female tumors clearly present a downward trend. We note that tumor sizes for the ages above seventy-five were computed with less data compared to other ages. Considering the importance of studying mean tumor sizes, we further delineate the data by histology and gender. The vast majority of people in the data set were white (87.3%), whereas diagnosed tumors in blacks comprised 7.1% of the total number of patients. About 9.4% of the tumors occurred in children (0-19 years old), whereas 46.3% of them were in the 20 to 64 age group, with the remaining 43.9% occurring in the elderly (65 years or older).
The 36 histology-specific sample means for three age groups and both sexes, along with the corresponding sample sizes, are tabulated in Table 1. (Note that the sums of the subgroups listed do not equal the total; tumor types included in the total may not be in a specific subgroup of the SEER classification.) For the six histologies combined, the mean tumor size for males is larger than that for females (p-value < 0.00005). We tested the equality of mean tumor sizes between males and females for specific histologies; all comparisons were made using the Kruskal-Wallis test. Only in the case of medulloblastoma did we fail to reject the null hypothesis (p-value = 0.42); the mean tumor size of medulloblastic tumors does not differ between the sexes. When testing the same hypothesis for the mean tumor size in the three different racial groups, we failed to reject the null hypothesis in all cases. The tabulated average tumor sizes, followed by the total number of patients in parentheses, refer to patients diagnosed with malignant brain tumors between 1969 and 2006.

Parametric analysis

The main advantage of parametric methodology is that the information contained in very large data sets can be concentrated in a small number of parameters. Furthermore, useful information can be drawn directly from the estimated parameters for different subpopulations, provided that the differences in the estimated parameters have a clear biological interpretation. Therefore, one of the undisputed properties required of probabilistic models for distributions of tumor sizes is their biological interpretation. To formulate the analysis we have partitioned the data with regard to race into whites, blacks, and a third class containing the remaining races, as well as by the gender of the patient. In Figure 3, we plot the identified probability density curves that best characterize the tumor sizes for the six racial/gender subpopulations. The basic characteristics of the tumor sizes for those subgroups are reported in Table 4. We can initially note a clear distinction between the distributions of male and female tumor sizes; the modes of the respective distributions are around 40 mm and 27 mm. In further detail, the probability distribution for black females is more skewed than any other of the identified distributions. Having knowledge of the probability distributions, we can better understand the variability of the tumor sizes and estimate basic measures for the tumor records in different population subgroups. In the following we tabulate the means, medians, variances, skewnesses (a measure of asymmetry), and kurtoses (a measure of flatness). We use the estimated values of the appropriate distributional parameters to compare the basic characteristics of the tumor sizes for the populations of interest (the mathematical formulas used to calculate the statistics of Table 3 are presented in Appendix A). From Table 4, the difference between the mean and median in the black populations is about double the corresponding difference in the other populations. Also, skewness for both the black male and black female populations is larger than for the other subgroups; that is, on average there is a higher number of tumor sizes more distant from the median. Although, as we have already mentioned, blacks comprise only 7.8% of the individuals in the data set, there is strong evidence that the variance is largest in the black subpopulation.
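The closed-form expressions of Appendix A are not reproduced here, but the Table 4-style summary statistics can be recovered numerically from a fitted three-parameter Dagum density. The sketch below reuses dagum_pdf from the earlier sketch and hypothetical parameter values; note that the n-th raw moment of a Dagum distribution is finite only when n < a, so the kurtosis requires a > 4.

```python
import numpy as np
from scipy.integrate import quad

def raw_moment(n, a, k, beta):
    # E[X^n] by quadrature over the fitted density; finite only when n < a.
    value, _ = quad(lambda x: x ** n * dagum_pdf(x, a, k, beta), 0.0, np.inf)
    return value

a, k, b = 5.2, 0.9, 40.0  # hypothetical fitted values for one subpopulation
m1, m2, m3, m4 = (raw_moment(n, a, k, b) for n in (1, 2, 3, 4))

mean = m1
variance = m2 - m1 ** 2
skewness = (m3 - 3 * m1 * m2 + 2 * m1 ** 3) / variance ** 1.5
kurtosis = (m4 - 4 * m1 * m3 + 6 * m1 ** 2 * m2 - 3 * m1 ** 4) / variance ** 2
median = b * (2.0 ** (1.0 / k) - 1.0) ** (-1.0 / a)  # closed-form Dagum median
print(mean, median, variance, skewness, kurtosis)
```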
Measuring the kurtosis observed in the data sets, we find that the value for black male individuals is three times as large as the second largest. Higher kurtosis means that more of the variance is the result of infrequent extreme deviations from the mean, as opposed to frequent modestly sized deviations. It is worth noting that, to our knowledge, there has been no systematic analysis of brain tumor sizes and the differences associated with the effect of race and gender on the respective distributions (the Surveillance Epidemiology and End Results organization does not analyze data on tumor size because of the large amount of missing data). Such measurements, though, are commonly used in the evaluation of diagnosed tumors and have implications for patient prognosis and treatment. In Table 5 we report probabilities of different tumor sizes for male and female patients based on the parameter estimates of the fitted distributions. The 5-year relative survival rates related to the tumor size characterization can be found in the last two columns (16). It is reported in (16) that for diagnosed brain tumor sizes ≤20 mm, 20-50 mm, and >50 mm, the 5-year relative survival rates are 31.5%, 19.8%, and 20.8%, respectively, and 89.2%, 71.2%, and 58.3%, respectively, for tumors of the other central nervous system. The 5-year relative survival rates are not significantly different for patients with tumor sizes between 20-50 mm and those bigger than 50 mm. Survival differences in subpopulations can be associated with the differences in the distributions of tumor sizes. In the following section we attempt to model the effect of those covariates on the quantiles of the distribution of tumor sizes.

Quantile Regression Model

To measure the effect of population characteristics on tumor size in even more detail, we employ regression models for different quantiles of the distribution of tumor sizes. We begin this section with a brief introduction to the model, and then immediately apply it to our dataset. Standard least squares regression models calculate the average effect of the independent variables on the tumor size. However, the focus on the average tumor size may hide important elements of the underlying relationship (17). There is extensive literature, mainly in economics, microeconomics, and econometrics ((18,19) are notable publications), as well as in several other areas including wealth inequality, food expenditures, school quality issues, and demand analysis, where a more comprehensive picture of the effect of the predictors on the response variable is needed. Quantile regression models the relation between a set of predictor variables and specific percentiles (or quantiles) of the response variable by specifying changes in the quantiles of the response. For example, a median regression of tumor size diagnosed in brain tumor patients specifies the changes in the median tumor size as a function of the predictors. The effect of gender on the median tumor size can then be compared to its effect on other quantiles of the tumor size distribution, and a more complete picture of covariate effects can be provided. Moreover, in linear regression the regression coefficient represents the change in the response variable produced by a one-unit change in the predictor variable associated with that coefficient. The quantile regression parameter estimates the change in a specified quantile of the response variable produced by a one-unit change in the predictor variable.
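A minimal sketch of such a median regression, using the statsmodels quantreg implementation on a synthetic stand-in for the registry extract, is shown below; the column names size, age, sex, and race are our own assumptions about the data layout.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the analysis file: one row per patient.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "size": rng.gamma(4.0, 10.0, n),               # tumor size in mm
    "age": rng.integers(0, 90, n),                 # age at diagnosis
    "sex": rng.choice(["female", "male"], n),
    "race": rng.choice(["white", "black", "other"], n),
})

# Median (q = 0.5) regression of tumor size on the demographic covariates.
median_fit = smf.quantreg("size ~ age + C(sex) + C(race)", df).fit(q=0.5)
print(median_fit.summary())
```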
This allows comparing how some percentiles of the tumor size may be more affected by certain characteristics than other percentiles. Our analysis includes tumor sizes measured in mm for the 22,140 individuals discussed in the Results section. Figure 4 presents a summary of quantile regression results for the data. In each plot, the regression coefficients for 19 different quantiles indicate the effect on the size of the tumor of a unit change in that variable, assuming that the other variables are fixed, with 95% confidence interval bands. For example, in the first panel of the figure, the intercept can be interpreted as the estimated conditional quantile function of the tumor size distribution of a female infant whose race is "other". The estimated coefficients for the quantiles of the tumor sizes for whites, which are plotted in the top right graph, show a negative effect on tumor size compared to "other race". Regarding the effect on the median and the higher quantiles, we did not see any difference from the coefficient in the linear regression case. In the bottom left graph, the estimates of the quantile regression for blacks are not significantly different from zero, even though the effect on the 90th quantile is about 14 times bigger than that on the average tumor size. In the last graph, in accordance with our OLS findings, male tumor sizes are larger than female ones; the largest difference between the quantiles for males and females is 9.93 mm at the 95% quantile (the 95% CI is between 3.39 and 16.47). In Table 6, selected results from the analysis of the 10th, 25th, 50th, 75th, and 90th quantiles are shown, including coefficients for interaction terms between race, gender, and age, tabulated to identify statistical significance with respect to the quantiles of the distribution of tumor sizes, while the complete tabulation of the coefficients, along with information on statistical significance, can be found in Appendix B. Age and its interaction with race appear to be statistically significant factors affecting the change in tumor size. Estimated parameter values for the variable age provide evidence that the marginal change is considerably different between the lower and upper quantiles of tumor sizes. Specifically, the estimated coefficients predict that the changes in the 5th, 10th, 15th, 20th, 25th, 30th, and 35th quantiles of the distribution of tumor sizes are -0.06, -0.09, -0.09, -0.10, -0.12, -0.08, and -0.09, respectively, for each additional year of age when the variables race and sex remain fixed; these estimates were statistically significant at the 95% level. The effect of age on the conditional distribution of tumor sizes was not significant for the higher quantiles; the ordinary least squares calculation shows a marginal decrease of 0.05 for each additional year. With regard to the interaction between age and race, for the 5th, 15th, and 20th quantiles of the distribution of the tumor sizes of the white race, the effect is positive and proportional to the coefficients for a marginal change in age, while this reverses for the higher quantiles, as shown in the table in Appendix B. We also identified a significant interaction between gender and age for the lower and middle quantiles, the control variable being female; as shown in Table 4, it is uniform across the quantiles and approximately equal to the estimated effect in the ordinary regression case.
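Continuing the previous sketch, a Figure 4-style summary can be assembled by refitting the model over the 19 quantiles and collecting each coefficient with its 95% confidence interval; the patsy term label for the male effect is an assumption that holds when "female" is the reference level.

```python
quantiles = np.arange(0.05, 1.0, 0.05)  # the 19 quantiles of Figure 4
sex_term = "C(sex)[T.male]"  # patsy label when 'female' is the reference level

rows = []
for q in quantiles:
    fit = smf.quantreg("size ~ age + C(sex) + C(race)", df).fit(q=q)
    low, high = fit.conf_int().loc[sex_term]
    rows.append({"q": round(q, 2), "beta_male": fit.params[sex_term],
                 "ci_low": low, "ci_high": high})
print(pd.DataFrame(rows))  # analogous to the male/female panel of Figure 4
```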
Several of the covariates are of substantial public health and policy interest; the interpretation of their causal effects may be controversial, especially in the case of the gender and race covariates. In almost all the panels of Figure 4, with the exception of the coefficients for whites at the 85th quantile, the quantile regression estimates do not lie at any point outside the confidence interval for the ordinary least squares regression (formal hypothesis testing is discussed in (20)), suggesting that the effects of these covariates do not differ substantially across the quantiles.

DISCUSSION

Tumor size is known to be of great prognostic importance, independent of other prognostic variables; the purpose of this research is to conjecture on the importance of the race, sex, and age covariates on brain tumor size. Efforts were made to link tumor size data from registries to demographic patient information to help researchers postulate histology-specific etiologic risk factors. We have applied probability and regression models to explore their impact on the size of a brain tumor. The vast majority of patients in the data set were white; the average tumor sizes diagnosed between twenty and forty years of age exhibited the largest variability, and this behavior was common to both sexes. The probability distributions fitted for the tumor sizes of both male and female patients were strongly skewed; for the black female population subgroup, the least amount of the variance was a result of infrequent extreme deviations from the mean. Finally, age distributions differ by histology type, suggesting that different etiologic factors are active for different histological types. In addition, we explored the sources of heterogeneity in the brain tumor sizes using quantile regression to identify the effect of homogeneous subpopulations associated with disease progression. The lower quantiles of the distribution of white tumor sizes are lower compared with the "other" race, whereas the 80th and 90th quantiles are significantly higher. When we compared the tumor sizes of blacks with the other races, the estimates of the coefficients for the distribution of tumor sizes for blacks showed that the higher the quantile of the distribution of tumor sizes, the bigger the difference from the corresponding quantile of the other races' distribution. The estimated coefficients for age predict that the effect of age on the lower quantiles of the tumor size distribution is negative when the variables race and sex remain fixed. Using the brain tumor registry data provided by SEER, we demonstrated differences in tumor sizes for a histology classification. In addition, we estimated the basic characteristics of the distribution of tumor sizes as well as the significant demographic effects on them. While several approaches have been considered in the literature, we found inference with brain tumor sizes a lucid way to discuss the effect of several prognostic factors.
Herbal Medicines for Asthmatic Inflammation: From Basic Researches to Clinical Applications

Asthma is one of the most common chronic inflammatory disorders, associated with reversible airflow obstruction, airway hyperresponsiveness, and airway remodeling. This disease has a significant impact on individuals, their families, and society. Standardized therapeutics such as inhaled corticosteroid in combination with long acting β2 agonist have been applied for asthma control; however, complementary and alternative medicines, especially herbal medicines, are still widely used all over the world. A growing body of literature suggests that various herbs or related products might be effective in inhibiting asthmatic inflammation. In this review, we summarize recent advances in the mechanistic studies of herbal medicines on allergic airway inflammation in animal models and their potential application in the clinic for asthma control.

Introduction

Asthma is a chronic inflammatory disease characterized by reversible airway obstruction, airway hyperresponsiveness (AHR), infiltration of inflammatory cells, mucus hypersecretion, and airway remodeling [1]. It affects 300 million individuals worldwide, with the prevalence ranging from 1% to 18% of the population in different countries [2]. A variety of immune cells, structural cells in the lung, cytokines, chemokines, adhesion molecules, and signaling pathways contribute to asthmatic pathogenesis. Although standardized therapeutics such as inhaled corticosteroid (ICS) in combination with long acting β2 agonist (LABA) have been used to control asthma symptoms, complementary and alternative medicine (CAM) is still common all over the world. A survey involving 7,685 individuals aged 55 or older with current asthma was performed recently in the USA, and it showed that CAM use in the older adult asthmatic population was frequent, with nearly 40% using some type of CAM [3]. Another survey about traditional Chinese medicine (TCM) for pediatric asthma in Taiwan showed that 57.95% (n = 26,585) of the investigated children had used TCM [4]. Given that TCM and CAM are widely used in asthma control, an increasing number of basic and clinical studies have been conducted to investigate the molecular mechanisms and clinical applications of herbal medicines for asthma therapy. Given the fact that asthma pathogenesis is complex, the roles and effective targets of these herbal products in asthma therapy are also very complicated. In general, basic research tries to isolate effective monomers from herbs or herbal formulas for asthma study, while most clinical research still focuses only on the efficacy for patients using an intact traditional formula. We will illuminate the major achievements of each, respectively, in this review.

Basic Researches

Basic researches on herbal asthma therapy can be summarized into nine aspects according to the mechanisms described below. Those studies that only reported that some T helper 1 (Th1) or T helper 2 (Th2) cell cytokines, for example, Interleukin-4 (IL-4), IL-5, IL-13, and interferon-γ (IFN-γ), were altered after herbal intervention, without any further mechanistic studies, will not be expatiated in our paper, as these cytokines can be affected by many factors.

Targeting the Th1/Th2 Imbalance. It is generally accepted that Th1/Th2 imbalance is responsible for the development of allergic asthma. Th1 cells secrete IFN-γ, IL-12, and tumor necrosis factor-α (TNF-α), whereas Th2 cells secrete IL-4, IL-5, and IL-13 [5].
IL-4 together with IL-13 causes isotype class-switching of B cells towards Immunoglobulin-E (IgE) synthesis; IgE can bind to high-affinity receptors on mast cells and basophils and leads to subsequent activation of these cells. IL-5 activates eosinophils and attracts them to the lung, where they secrete numerous inflammatory cytokines and chemokines. IL-13 also directly affects the airway epithelium, including increases in goblet cell differentiation, activation of fibroblasts, and bronchial hyperresponsiveness [5][6][7]. These cytokines may also in turn affect the Th1/Th2 balance [5,8]. It has been acknowledged that two transcription factors, that is, T-bet and GATA-3, are responsible for the Th1/Th2 balance. Further, GATA-3 and T-bet can be influenced by IL-12, IFN-γ, or IL-4 via the signal transducers and activators of transcription (STAT), that is, STAT4, STAT1, and STAT6, respectively [5]. Extracts from Astragalus, Panax ginseng, Saururus chinensis, Psoralea, and Ligustrazine [9] were reported to share a similar mechanism of decreasing the ratio of GATA-3/T-bet expression. Typically, Jin et al. [10] investigated the effects of the boiling water extract of Psoralea fructus (PF) and psoralen, an active ingredient of PF, on Th2 clone (D10.G.4.1) cells in vitro and in vivo, and interpreted their effect as suppressing GATA-3 protein expression. Similarly, Chen et al. [11] found that a single compound, Bavachinin, isolated from PF decreased GATA-3 function by reducing the stability of GATA-3 mRNA, and further suggested that Bavachinin may suppress the binding or coactivating function, but not the expression, of pSTAT6. In their further study [12], two new derivatives of Bavachinin with better water solubility were investigated, and one of these two derivatives not only inhibited GATA-3 mRNA production but also increased T-bet mRNA production. However, clinical research on Psoralea fructus for treating asthma is lacking. Only a few case reports [13], published in Chinese and involving Psoralea fructus-related recipes, can be found, and they provide limited evidence. Efficacy on STAT6 was reported for extracts from Scutellaria baicalensis [14] and Cnidii monnieri [15]. Chiu et al. [15] explored the effects of Osthol (a Cnidii monnieri fructus extract) on epithelial cells using human bronchial epithelial cells (BEAS-2B) in vitro. Their research demonstrated that Osthol suppressed IL-4-induced eotaxin (a key mediator in allergic diseases with eosinophilic infiltration) in epithelial cells via inhibition of STAT6 expression.

Effect on MAPK and NF-κB Signaling Pathways. Mitogen-activated protein kinases (MAPKs), which comprise three major subgroups, that is, extracellular signal-regulated kinase 1/2 (ERK1/2), p38, and c-Jun N-terminal kinase 1/2 (JNK1/2), play critical roles in the activation of inflammatory cells [24]. Nuclear factor kappa B (NF-κB) is an important transcription factor involved in the expression of various proinflammatory genes. Increased activation of NF-κB has been observed in the lungs after allergen challenge and in airway epithelial cells and macrophages from asthmatic patients [25]. Many studies have reported that allergic asthma could be improved by regulating the activation of the MAPK and NF-κB signaling pathways [26,27].

Targeting the Treg/Th17 Cells. T-regulatory cells (Tregs) are a heterogeneous group of cells that play a central role in maintaining the homeostasis of pulmonary immunity by establishing immune tolerance to nonharmful antigens or suppressing effector T cell immunity.
The specification of the Treg subset is driven by the transcription factor forkhead box P3 (Foxp3) [5,[75][76][77][78]. Th17 cells are key players in chronic lung inflammation, including asthma. Steroid-resistant asthma and neutrophil-mediated asthma have been shown to be related to Th17 cells. IL-17 also directly affects the airway smooth muscle by inducing allergen-induced airway hyperresponsiveness [79][80][81][82]. RORγt has been found to be the transcription factor related to Th17, which is required to activate IL-17 production in Th17 cells [5]. Increased expression of IL-17A and IL-17F has been shown in lung tissue of asthma patients [6]. Thus, herbal treatments targeting Foxp3 and RORγt have been revealed gradually.

Effect on Lung Dendritic Cells (DCs). Dendritic cells (DCs) participate not only in the differentiation of T helper cells but also in IL-12 production and CD8+ T cell stimulation via antigen uptake. Two subsets of blood DCs, that is, myeloid and plasmacytoid DCs, were identified based on the expression of CD11c [5]. Most CD11c+ myeloid DCs in the lung are immature; they express relatively low levels of major histocompatibility complex (MHC) class II and have a high capacity for antigen uptake but poor T cell stimulating activity [5]. Thus, inhibiting the functional differentiation of immature pulmonary DCs into mature DCs may be a strategy to restrict the activation of T cells. Lee et al. [63] discussed the effect of Artemisia iwayomogi polysaccharide-1 (AIP1) on DC functions. They observed significantly reduced levels of MHC II in DCs of the AIP1-treated group, suggesting that AIP1 could reduce the expression of MHC II molecules on pulmonary DCs. They also reported, in another study, that AIP1 diminished the allergenic T cell stimulating ability of DCs derived from bone marrow. These data suggested that AIP1 could inhibit the functional differentiation of pulmonary DCs in vivo.

Effect on Mast Cell Degranulation. Mast cell degranulation, which can be triggered by antigen-mediated crosslinking of IgE bound to FcεRI surface receptors or by changes in the surrounding local tissue environment, plays an important role in the asthmatic response. As a result, many of the mediators that are stored or newly synthesized by the mast cells are released, attracting leukocytes (eosinophils, basophils, Th2 lymphocytes, and neutrophils) to the inflammatory site and amplifying the inflammatory response [86]. Hence, inhibiting mast cell degranulation will be helpful for treating asthma. We gathered three extracts associated with this process: Oroxylin A [43], Bakkenolide B [66], and Petatewalide B [67]. The first is isolated from Scutellaria baicalensis, while the next two are from Petasites japonicus. It is worth mentioning that neither Bakkenolide B nor Petatewalide B inhibits antigen-induced Ca2+ increases in mast cells, which suggests that the Bakkenolide B/Petatewalide B-induced inhibition of degranulation is not mediated via the inhibition of Ca2+ channels or Ca2+ increases in mast cells. As for Oroxylin A, no detailed mechanisms were provided to explain the phenomenon of inhibited mast cell degranulation. Further mechanistic investigations of these extracts are necessary.

Effect on Oxidative Stress. Oxidative stress plays an important role in the pathogenesis of most airway diseases, particularly when inflammation is prominent. Recently, heme oxygenase-1 (HO-1) was shown to be induced in the airways of patients with asthma.
As a natural antioxidant defense, HO-1 exerts cytoprotective reactions against oxidative cell injury. Greater HO-1 expression may mitigate asthma symptoms and suppress IL-13-induced goblet cell hyperplasia and MUC5AC production [87][88][89]. Hence, targeting HO-1 or its transcription factor, nuclear factor E2-related factor 2 (Nrf-2) [90], is a worthwhile strategy for asthma control. There are also some studies that only showed a reduced level of the oxidative stress marker reactive oxygen species (ROS) when extracts such as the ethanol extracts of Mentha [59] and Petasites japonicus [65] were administered. The mechanisms by which they reduce the ROS level need further study.

Effect on Relaxing Airway Smooth Muscle. Airway contraction is an important feature of asthma, and recent strategies to relax airway smooth muscle include antihistamines, anticholinergics, β2-adrenoceptor stimulation, and Ca2+ signaling blockade [61]. Mokhtari-Zaer et al. [61] summarized the effect of Crocus sativus (saffron) on relaxing airway smooth muscle in a review. The aqueous-ethanolic extract of Crocus sativus and safranal were mentioned in their article and showed multiple effects, including antihistamine, anticholinergic, and β2-adrenoceptor stimulating actions, according to four published studies. Yang et al. [69] identified that trifolirhizin, a flavonoid compound isolated from Sophora flavescens, was responsible for inhibiting acetylcholine-induced airway smooth muscle (ASM) contraction independent of β2-adrenoceptors. An aqueous methanolic extract from Zingiber officinale (ginger) was also reported to have an effect on acetylcholine-induced airway contraction. Ghayur et al. [70] indicated that its effects were associated with Ca2+ signaling, possibly via blocking Ca2+ channels on the plasma membrane.

Effect on Airway Remodeling. It is believed that airway smooth muscle (ASM) cell proliferation and migration play important roles in airway remodeling. Both platelet-derived growth factor (PDGF) and transforming growth factor-β (TGF-β) are reported to be related to airway remodeling [91]. Recently, the TGF-β1/Smad signaling pathway was found to be one of the important mechanisms of signal conduction in asthmatic airway remodeling [92]. Astragaloside IV and Skullcapflavone II, extracts from Astragalus and Scutellaria baicalensis, respectively, were reported to attenuate allergen-induced airway remodeling in mice, likely through inhibition of TGF-β1 [37,41]. Further research was performed by Jang et al. [41] and indicated that Skullcapflavone II elevated Smad7 and suppressed Smad2/3 expression, which was responsible for the TGF-β1 inhibition. As for Astragalus, our group has researched several formulas using Astragalus as a key component acting on the TGF-β1/Smad signaling pathway. One, Astragali-Cordyceps Mixtura [50], decreased TGF-β1 expression and recovered Smad7 protein expression; the other, Astragali radix Antiasthmatic Decoction (AAD) [93], was also found to improve the symptoms of allergic airway remodeling through inhibition of Th2 cytokines and TGF-β1. In a recent study, we also noticed that Suhuang antitussive capsule, a traditional Chinese medication, significantly attenuated the allergen-induced AHR, inflammation, and remodeling in mice, likely through inhibition of IL-13 and TGF-β1 [94].

Effect on the Arachidonic Acid Metabolism Pathway (AAMP). Arachidonic acid (AA), from the diet or after synthesis, is stored in membrane phospholipids and is liberated under appropriate stimulatory conditions by the enzyme phospholipase A2 (PLA2).
Arachidonic acid is then metabolized by three main classes of enzymes (cyclooxygenases (COX), lipoxygenases (LOX), and p450 epoxygenases), and the products of these three pathways, such as prostaglandin E2 (PGE2), prostaglandin D2 (PGD2), and leukotrienes (LTs), are related to inflammatory and anaphylactic reactions. To be specific, PGE2 has been thought of as a potent proinflammatory mediator, yet it has many beneficial functions in the lung tissues, such as the inhibition of inflammatory cell recruitment, reduction of leukotrienes and PGD2, and decrease of Th2 differentiation, thereby modulating inflammation and tissue repair. LTs (LTB4, LTC4, LTD4, etc.) are also thought to be important mediators of airway inflammation and airway obstruction in asthma. LTB4 can act as a neutrophil chemoattractant [53,95,96]. Thus, strategies targeting AA metabolism are effective in many inflammatory diseases. In particular, there are four herbs we would like to review individually, as follows, for their multiple antiasthmatic functions and their popularity in basic researches. Scutellaria baicalensis is a multifunctional traditional herb. Its extracts include Skullcapflavone II, Baicalein, Oroxylin A, wogonin, and Baicalin, which showed different but possibly cooperative functions in treating asthma. All of them have been tested in animal experiments or in vitro but not in humans, as mentioned before. Moreover, their results only showed isolated effects on some cytokines or genes related to asthma. Whether they can benefit the whole network of interactions when asthma occurs is still unclear. For example, Jang et al.'s experiment [41] indicated Skullcapflavone II's effect on the TGF-β1/Smad signaling pathway. They observed a decreased level of TGF-β1 in BALF, elevated Smad7 expression, and suppressed Smad2/3. However, as a pleiotropic and multifunctional growth factor, TGF-β1 also exerts immunosuppressive effects on asthma progression, and therapies targeting TGF-β1 are still controversial, although it has been expatiated that TGF-β1 is responsible for airway remodeling [97]. Another multifunctional herb with relatively sufficient studies is Astragalus membranaceus. We have shown above that its extracts have been reported to act on GATA-3/T-bet, the TGF-β1/Smad and NF-κB signaling pathways, and Tregs within the last few years. Among them, Astragaloside IV should be underlined here for the fact that it exhibits nearly all the antiasthmatic effects that Astragalus membranaceus possesses. However, it is a pity that although abundant studies have been performed on Astragaloside IV, no clinical research on it is available to date. Extracts (RG-II, CVT-E002, and ginsan) from Panax were also reported to act on different pathways involving GATA-3/T-bet, MAPK, Tregs, and the arachidonic acid metabolism pathway in animal models or in vitro. But whether these results can be applied to humans is undetermined. It is worth mentioning that CVT-E002 has been well proved to reduce respiratory infection in patients with chronic lymphocytic leukemia and to prevent acute respiratory illness in institutionalized older adults [98,99]. This may partly reflect its immunoregulatory function in humans. Research on its efficacy in asthma patients is desired. Saururus chinensis (SC) is an effective antioxidant. Three of its extracts (saucerneol D [54], a subfraction of its ethanol extract, and sauchinone [55]) showed a similar antioxidant effect through upregulating the expression of HO-1. Among them, sauchinone also suppressed GATA-3 activity [56].
Recently, a novel extract of SC named meso-dihydroguaiaretic acid was described by Song and colleagues [31]. It exhibited a protective effect on allergic airway inflammation through inhibiting Th2 inflammation, attributed to its inhibition of the NF-κB and MAPK pathways. Besides, its ethanol extract's action on the arachidonic acid metabolism pathway was reported this year [57]. The antiasthmatic effects of SC extracts have drawn wide attention in the last few years.

Clinical Studies

Clinical research on the application of herbal therapy in asthma is limited. Three meta-analyses or systematic reviews were published successively in 2007, 2008, and 2010 [100][101][102]. However, none of them could find sufficient evidence to make recommendations on herbal treatment for asthma after comprehensive analysis of efficacy and safety. Arnold et al. [100] evaluated the effects of herbal medicines on lung function, reduction in use of corticosteroids, symptom scores, physical sign scores, use of reliever medications, health-related quality of life, and adverse effects compared with placebo, involving 21 different herbs or herbal formulas. Although a few of them had some effect on relief of symptoms, only boswellic acids (isolated from Boswellia) were reported to exert a relatively comprehensive effect on lung function, while the effects of other herbs were limited or uncertain. In Clark et al.'s study [102], Mai-Men-Dong-Tang, Pycnogenol, Jia-Wei-Si-Jun-Zi-Tang, and Tylophora indica also showed potential to improve lung function. Moreover, 1,8-cineol (eucalyptol) was observed to reduce the use of corticosteroids and to allow tolerated corticosteroid reduction (<7.5 mg) in both of their studies [100,102]. In the last five years, some new clinical trials on herbal treatment have emerged, but most of them still focus on the efficacy for patients using an intact traditional or modified formula. According to recent studies, monomer extracts applied in the clinic usually act as adjuvants to standard asthma therapy. In a noncomparative, multicenter trial [103], 148 patients with mild asthma taking ICS received NDC-052 (an extract from Magnoliae flos) for eight weeks. Their results showed that add-on NDC-052 besides ICS therapy had benefits in both ΔPEFR and asthma symptoms. Last year, a review by Ammon [104] showed multiple effects of Boswellia serrata extracts on immune system modulation in basic research, including inhibition of NF-κB activation, mast cell stabilisation, and antioxidant and 5-LOX-inhibitory actions. However, related clinical research in the past five years can only be found for reducing the need for inhalation therapy with ICS + LABAs [105]. Although its significant effect on lung function had been analyzed by Arnold et al., as mentioned before [100], further research is still lacking. Regarding intact traditional or modified formulas for asthma reported recently, there have been three types of research interest in the past five years. First, herbal formulas are used alone to relieve asthmatic symptoms. These studies paid attention to syndrome scores and the frequency of acute asthma attacks, although the formulas may have limited efficacy on the asthma process or lung function improvement. Geng et al. [106] performed a randomized, single-blind, placebo-controlled trial (sample size = 60) in children with intermittent asthma aged 2 to 5 years.
Their modified formula contained Radix Astragali Mongolici 10 g, Rhizoma Polygonati Odorati 10 g, Fructus Ligustri Lucidi 10 g, Fructus Psoraleae 10 g, Radix Pseudostellariae 3 g, Fructus Schisandrae Chinensis 3 g, Fructus Jujubae 10 g, Concha Ostreae 10 g, and Endoconcha Sepiellae 10 g. Their results showed that the formula reduced the number of intermittent asthma attacks, decreased the syndrome scores, and reduced the airway resistance in the children. Similarly, another formula, "Zhisou Powder," was observed to decrease the syndrome score and cough score of cough variant asthma, but it had no effect on airway responsiveness [107]. Second, perhaps the most popular direction is the use of herbal formulas as add-on therapy to standard medication to relieve asthmatic symptoms, reduce the recurrence rate, or strengthen the effect of the standard medication. A randomized controlled study of 143 patients with moderate-to-severe asthma was performed by Tang and colleagues [108]. They applied their formula (containing 21 herbs and excipients) as add-on therapy to standard medication and showed a decrease in exacerbation frequency and improvement in related syndrome scores, for example, the asthma control test (ACT) score. Lin et al. [109] studied the effect of Astragalus plus hormone treatment in 90 asthmatic children. They showed that the effective rate in the Astragalus plus hormone group was significantly higher compared to using Astragalus or hormone alone. The levels of PEFR and IFN-γ significantly increased and IL-4 markedly decreased in their effective cases. Besides, in another study, a benefit of "chaipo granule" combined with routine treatment for refractory asthma was also discovered; "chaipo granule" can act as a synergist of routine treatment according to their results [110]. A similar strengthening effect was also reported for the formula "Yupingfeng powder" [111]. Third, studies in this line of research follow a traditional administration method in China, called "Jiu" (moxibustion), which burns herbs at the relevant acupoints of patients. Practitioners today have modified the method and are using percutaneous absorption herbal patches to treat asthma. Typically, Chen et al. [112] compared their modified percutaneous absorption herbal patch with salmeterol/fluticasone inhalation for asthma in paracmasis (the remission stage). Their results showed a significant improvement in clinical symptom scores.

Concluding Remarks

From all the above, it can be noticed that in clinical research herbal therapy has mainly been applied to mild asthma or has acted as an adjuvant to standard asthma therapy. This seems to be becoming a trend. It might be a good way to use complementary and alternative medicine (CAM) to help control symptoms and reduce the drug dose for those patients receiving standard medication, as has been shown by several studies [100][101][102][103][104][105][108]. However, compared with the prosperous basic researches, clinical researches on herbal therapy for asthma are relatively scarce. This may be because many herbal extracts targeting one or two factors of asthma may not be that meaningful when applied in the clinic, as asthma is a disease with complex and multiple mechanisms. On the other hand, as the herbs are usually administered as formulas, simultaneously using extracts from different herbs with different mechanisms can also be a good research direction. It can be expected that they act synergistically and provide significant benefit in asthma when used simultaneously.
Administering purified extracts synergistically will offer more precision than using an intact formula directly. Despite the remarkable achievements in herbal treatment for asthma during the past several years, several points should be addressed or kept in mind in future studies. (1) Quality control of herbal medicines is always problematic, due to the lack of standard procedures for making herb extractions, decoctions, or formulas. Herbal patent drugs are generally of better quality, but few are available for asthma research. (2) Apparently, there is an urgent need for translational studies to transfer the current achievements on herbal medicines from animal work to clinical trials and finally to develop new clinical therapies. Current clinical trials for herbal medicines are limited, and thus more large-scale, multicentre trials are greatly needed. Fortunately, well-designed and well-performed clinical trials for herbal medicines have been appreciated by world-class journals [113], encouraging similar studies in asthma. (3) For clinical studies, the use of herbal medicines for stable asthma is recommended. Asthma research in animal models usually cannot mimic the distinction between stable asthma and exacerbation. Herbal medicines might help to control stable asthma, while they may have limited effects during asthma exacerbations, since the herbs generally take a longer time to exert therapeutic effects. (4) Clinical studies of herbal medicines in asthma control should target two major different goals. One is to take herbal medicines as a sole asthma control strategy, and the other is to use them as an add-on therapy for standard strategies, that is, to enhance the efficacy of or to reduce the usage of ICS. (5) Most of the current researches have focused only on the eosinophilic phenotype of asthma, and few have addressed other phenotypes like neutrophilic asthma, as the latter is usually severe asthma and is hard to control by current approaches. Thus, it is encouraged to conduct studies, both basic and clinical, to investigate the possible roles of herbal medicines in the control of severe or neutrophilic asthma. Nevertheless, herbal medicines for asthma hold out a cheerful prospect, which will eventually help to reduce the morbidity and mortality and increase the control levels of asthma worldwide.
Development of Prediction Model to Estimate the Risk of Heart Failure in Diabetes Mellitus

Background Heart failure (HF) is a leading cause of mortality and disability in patients with diabetes mellitus (DM). The aim of the study is to predict the risk of HF incidence in patients with DM by developing a risk prediction model. Methods We constructed a regression model based on 270 inpatients with DM between February 2018 and January 2019. Binary logistic regression was applied to develop the final model incorporating the predictors selected by least absolute shrinkage and selection operator regression. The nomogram was evaluated with the area under the receiver operating characteristic curve and a calibration diagram and validated with the bootstrap method. Results Risk factors including age, coronary heart disease (CHD), high-density lipoprotein (HDL), and low-density lipoprotein (LDL) were incorporated in the final model as predictors. Age ≥ 61 years, LDL, and CHD were risk factors for DM with HF, with odds ratios (ORs) of 32.84 (95% CI: 6.74, 253.99), 1.33 (95% CI: 1.06, 1.72), and 3.94 (95% CI: 1.43, 13.43), respectively. HDL was a protective factor, with an OR of 0.11 (95% CI: 0.04, 0.28). The area under the curve of the model was 0.863 (95% confidence interval, 0.812∼0.913). The calibration plot showed good consistency between predicted probability and actual probability. Harrell's C-index of the nomogram was 0.845, and the model showed satisfactory calibration in the internal validation cohort. Conclusion The prediction nomogram we developed can estimate the probability of HF in patients with DM according to the predictor items.

INTRODUCTION

The incidence and mortality rate of diabetes mellitus (DM) have been increasing in recent years, especially in developing countries (1,2). Cardiovascular disease (CVD) is a major complication of blood glucose dysregulation (3). Patients with diabetes have a 2- to 4-fold increased risk of heart failure (HF) compared with those without diabetes (4). There are higher prevalence, incidence, and mortality in diabetic patients with HF compared with those with diabetes who remain HF-free (5)(6)(7). In population-based studies, concomitant DM increases the risk of death in both hospitalized and ambulatory patients with HF (8)(9)(10)(11). It is important to note that even in patients with prediabetes, the risk of HF is increased and associated with poor prognosis (12,13). DM and HF each carry considerable morbidity and mortality; when they occur together, adverse patient outcomes, quality of life, and costs of care worsen further (4). It is important to find the risk factors for diabetes complicated by HF. Therefore, we aimed to develop a simplified prediction model to identify diabetic patients at high risk of HF early and to enable early intervention for them.

Patients

We conducted a study of inpatients diagnosed with DM at Southern Medical University (The First People's Hospital of Shunde) between February 2018 and January 2019. In total, 270 patients were included. Our outcome was diabetes mellitus with heart failure (DM-HF). We excluded patients who had already suffered from DM-HF and received treatment. DM was defined as use of medications for diabetes or fasting blood glucose ≥ 7 mmol/L and/or hemoglobin A1c (HbA1c) ≥ 6.5% (14). The HF diagnostic criteria were in accordance with the Chinese Guidelines for the Diagnosis and Treatment of Heart Failure 2018 (15).
Studies involving human participants were reviewed and approved by the Ethics Committee of Southern Medical University (The First People's Hospital of Shunde). The approval number from the Ethics Committee is 20190525. The patients/participants provided their written informed consent to participate in this study.

Data Collection

We collected baseline data from the patients at early hospital admission: (1) patient characteristics, including age group (≤ 29 years, 30∼60 years, and ≥ 61 years), sex, and history of smoking and drinking; (2) clinical laboratory data, including high-density lipoprotein (HDL), low-density lipoprotein (LDL), estimated glomerular filtration rate (eGFR), and the aldehyde dehydrogenase 2 (ALDH2) gene; (3) cardiovascular conditions, including blood pressure and history of coronary heart disease (CHD). We combined the results of published studies with actual clinical examination (16)(17)(18)(19)(20), and we used age group, sex, history of smoking and drinking, HDL, LDL, ALDH2, and cardiovascular conditions as the predictor variables.

Model Development and Validation

Least absolute shrinkage and selection operator (Lasso) logistic regression was conducted to select the optimal predictive factors. A model with excellent performance and the smallest number of independent variables was obtained when we adopted the lambda.1se criterion. A multivariate logistic regression analysis was conducted to develop a prediction model incorporating the variables selected by the Lasso model. The created model was tested in internal validation with the bootstrap resampling technique, in which regression models were fitted in 100 bootstrap replicates. The performance of the model was expressed with the following indicators:
1. A calibration plot was used to estimate the calibration of the final model and the bootstrap validation. The difference between the average of the observed outcomes and the average of the predicted probabilities is reflected in the plot.
2. An R2 statistic was computed to evaluate the goodness of fit of the model. The closer the value is to 1, the better the model fits the sample observations.
3. The area under the curve (AUC) was used to evaluate the discrimination of the model. The better the model differentiates between patients who did or did not have HF, the closer the AUC is to 1.
4. The Brier score was used to measure the error between the probability predicted by the model and the observed value.
All model development and validation were performed with R version 4.1.2 (R Foundation for Statistical Computing, Vienna, Austria). The R code of the developed model is available in Supplementary Material 1. Our study was in accordance with the TRIPOD statement (Supplementary Material 2). Continuous data were expressed as median (25th, 75th percentiles) and categorical variables as frequencies and percentages. Some data were missing for all the risk factors except for age, sex, eGFR, and the ALDH2 gene. We filled in missing values with mean/mode imputation.

Development of the Model

Lasso regression was used to screen the four indicators involved in the establishment of the model (Figures 1A,B).

Performance of the Model and Internal Validation

We drew calibration curves to assess the degree of calibration of the risk prediction and the internal validation of the DM-HF model (Figures 2A,B). As shown in Figure 2A, the x-axis represents the predicted risk of DM-HF, and the y-axis represents the actual probability of DM-HF.
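The authors report their model in R (Supplementary Material 1), which is not reproduced in this excerpt. As a hedged illustration of the workflow described above (L1-penalised selection, refit of a plain logistic model, AUC/Brier evaluation, and 100-replicate bootstrap validation), a Python analogue might look as follows; the data are synthetic placeholders, and cross-validated C selection only approximates glmnet's lambda.1se rule.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.utils import resample

rng = np.random.default_rng(0)
names = ["age_ge_61", "sex", "smoking", "drinking", "HDL", "LDL", "ALDH2", "CHD"]
X = rng.normal(size=(270, len(names)))   # placeholder predictor matrix (n = 270)
y = rng.integers(0, 2, size=270)         # placeholder DM-HF outcome (0/1)

# L1-penalised logistic regression with cross-validated strength, standing in
# for glmnet's lambda.1se selection used in the paper.
lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=20, cv=10).fit(X, y)
keep = np.flatnonzero(lasso.coef_.ravel() != 0)
if keep.size == 0:   # guard for the synthetic data; the real data select 4 variables
    keep = np.arange(X.shape[1])
print("selected predictors:", [names[i] for i in keep])

# Refit an effectively unpenalised logistic model on the selected predictors.
final = LogisticRegression(C=1e6, max_iter=1000).fit(X[:, keep], y)
p_hat = final.predict_proba(X[:, keep])[:, 1]
print("AUC:", roc_auc_score(y, p_hat), "Brier:", brier_score_loss(y, p_hat))

# Bootstrap internal validation with 100 replicates, as described above.
boot_auc = []
for _ in range(100):
    Xb, yb = resample(X[:, keep], y)
    model = LogisticRegression(C=1e6, max_iter=1000).fit(Xb, yb)
    boot_auc.append(roc_auc_score(y, model.predict_proba(X[:, keep])[:, 1]))
print("bootstrap AUC: %.3f +/- %.3f" % (np.mean(boot_auc), np.std(boot_auc)))
```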
The dotted line represents the realized prediction power, and the solid line represents the prediction of an ideal model. The results showed a good consistency between the nomographic model and the ideal model. The calibration curve of the model also demonstrated good agreement in the bootstrap validation cohort (Figure 2B). Table 3 shows that the final logistic regression model has good discrimination for DM-HF (AUC = 0.863; 95% CI: 0.812∼0.913; R2 = 0.477; Brier score = 0.128). The results of the internal validation indicated that there was negligible model overfitting.

Model Presentation

The final prediction model of DM-HF is displayed as a nomogram (Figure 3). Each variable is assigned a score, and the sum of all the variable scores is the total score. Mapping the total score onto the DM-HF predicted-value axis gives the probability of DM-HF. The higher the score, the higher the risk that a patient with DM will develop HF. The web calculator can be linked through this URL: Nomogram For Diabetes Mellitus with Heart Failure (shinyapps.io).

DISCUSSION

In our study, a simplified model to predict the risk of DM-HF was constructed and successfully internally validated. The model showed good calibration and discrimination. We also provided the nomogram and a web calculator to help calculate the risk. Our study showed that age ≥ 61 years, LDL, and CHD were risk factors for DM-HF, and that high HDL was a protective factor for DM-HF. A multicenter study including 4,447 people concluded that more attention should be paid to elderly people to follow up on their risk of HF (21). In a systematic review and meta-analysis, the association between incident HF and a 5-year increase in age (1.47; 1.25-1.73) was reported (19). In many studies, age was a risk factor for DM-HF (22,23). Diabetic patients with a low HDL level [3.62 (2.06-6.36)] have a higher risk of HF (24). Dyslipidemia is a risk factor for DM-HF. Lowering the level of LDL and increasing the level of HDL are appealing goals for reducing the risk of HF in patients with DM (25). The most common cause of HF is ischemic heart disease owing to impaired myocardial perfusion, while there are other common causes including DM and CHD (26). Murtaza et al. believed that long-term diabetes leads to structural and functional changes in the development and progression of HF, independent of myocardial ischemia or microvascular atherosclerotic disease (27). Although some published models can predict the risk of HF in patients with diabetes, our Lasso regression model still has its unique advantages. On the one hand, the population we included in our study was different from theirs. The data they used came from the Action to Control Cardiovascular Risk in Diabetes Study Group (ACCORD) trial, which was conducted in 77 centers across the United States and Canada (28,29). All participants had established atherosclerotic coronary vascular disease or were 55-79 years of age with documented atherosclerosis, albuminuria, left ventricular hypertrophy, or two or more other cardiovascular risk factors, while our data came from Southern Medical University (The First People's Hospital of Shunde), which is located in mainland China and covers a catchment area of 3.2 million residents. On the other hand, we used a different statistical tool to construct the prediction model and visualization tools to present the model. There are some limitations in our study. First, this is a retrospective study, and the results are inevitably biased.
Second, the sample size in our research is very limited. In our subsequent research, we will enroll more patients for external validation. Third, some biomarkers have been well established to be associated with the risk of HF, including N-terminal pro B-type natriuretic peptide and soluble suppression of tumorigenesis-2 (30,31). Furthermore, some other novel biomarkers, such as secreted frizzled-related protein 2 (SFRP2) (32,33), trimethylamine N-oxide (TMAO) (34), and polyunsaturated fatty acids, were also associated with the risk of HF (35). However, we did not include these biomarkers in our prediction model, as we aimed to provide metrics that can be easily extracted from clinical data. In conclusion, we conducted Lasso regression to screen variables and built a risk prediction model for diabetic HF. The model showed good discrimination and calibration in internal validation. A nomogram and a webpage calculator based on the model allow patients or doctors to quickly calculate the risk of diabetic HF, which can help patients with diabetes better reduce this risk.

DATA AVAILABILITY STATEMENT

The datasets presented in this article are not readily available, to uphold patient/participant privacy. Requests to access the datasets should be directed to the corresponding author.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethics Committee of Southern Medical University (The First People's Hospital of Shunde). Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS

HQ and WL contributed to the conception and design of the study. CW, PY, and WL organized the database. HQ performed the statistical analysis and wrote the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
Cognitive Screening in Brain Tumors: Short but Sensitive Enough?

Cognitive deficits in brain tumors are generally thought to be relatively mild and non-specific, although recent evidence challenges this notion. One possibility is that cognitive screening tools are being used to assess cognitive functions but their sensitivity to detect cognitive impairment may be limited. For improved sensitivity to recognize mild and/or focal cognitive deficits in brain tumors, neuropsychological evaluation tailored to detect specific impairments has been thought crucial. This study investigates the sensitivity of a cognitive screening tool, the Montreal Cognitive Assessment (MoCA), compared to a brief but tailored cognitive assessment (CA) for identifying cognitive deficits in an unselected primary brain tumor sample (i.e., low/high-grade gliomas, meningiomas). Performance is compared on broad measures of impairment: (a) the number of patients impaired on the global screening measure or in any cognitive domain; and (b) the number of cognitive domains impaired, together with specific analyses of MoCA-Intact and MoCA-Impaired patients on specific cognitive tests. The MoCA-Impaired group obtained lower naming and word fluency scores than the MoCA-Intact group, but otherwise performed comparably on cognitive tests. Overall, based on our results from patients with brain tumors, the MoCA has extremely poor sensitivity for detecting cognitive impairments, and a brief but tailored CA is necessary. These findings are discussed in relation to broader issues for clinical management and planning, as well as specific considerations for neuropsychological assessment of brain tumor patients.

INTRODUCTION

Cognitive function is an independent prognostic factor for the survival of glioma patients (1,2). For brain tumors, cognitive assessment (CA) can inform clinicians of areas to target for neurorehabilitation (3), monitor progress to facilitate decision making about further intervention (4), and, if there has been a decline in cognitive function, address the question of whether the tumor has recurred or progressed (3). In addition, a CA is able to address the question of whether subtle alterations in cognitive function are significant or not, particularly when monitoring slow-growing low-grade gliomas (4). Assessment of cognitive status can be undertaken with a brief cognitive screen or with a longer formal neuropsychological evaluation. Cognitive screening is typically used in acute states, at the bedside; hence, the focus of our study is to identify whether a brief CA can be tolerated and completed in a relatively acute state (post-surgery but <3 months) and, if so, whether this yields better results in terms of detecting cognitive deficits. Cognitive screening tools are popular, but their sensitivity to cognitive impairment in general, and specifically in brain tumor patients, has been questioned (4). One reason may be that brain tumor-associated cognitive deficits have been thought to be relatively mild and non-specific (5), although this has recently been challenged (6). It is unsurprising that the severity and specificity of cognitive deficits in brain tumor patients have been debated, as prevalence rates vary from 29% to 91%. This variability may depend on several factors, including time of assessment (pre- or post-surgery), tumor grade, treatments (radiation, chemotherapy), and lesion location (7). However, the main reason for this variability may be the method used to assess cognitive functions.
For example, in one study, few patients with low-grade gliomas showed cognitive deterioration when screened with the mini-mental state examination (MMSE) (8), irrespective of radiation treatment (9). By contrast, Tucha and colleagues (10) investigated cognitive function with neuropsychological tests and reported that 91% of patients with frontal or temporal tumors were impaired in at least one cognitive domain. In this study, we aimed to investigate the most effective and efficient method of detecting cognitive impairments in the acute period following tumor resection by directly comparing a cognitive screening tool with a brief but domain-specific CA.

Cognitive screening tools have the advantage of brevity and simplicity of administration. The main question, however, is whether these tools are sensitive enough to detect abnormalities. In the last decade, the Montreal Cognitive Assessment (MoCA) (11) screening tool has been increasingly favored over the MMSE as it has been shown to have greater sensitivity for detecting cognitive dysfunction. This has been shown in patients with brain tumors (12) and brain metastases (13), as well as in other neurological conditions including stroke (14), sub-arachnoid hemorrhages (SAHs) (15), and silent cerebral infarcts (16). Bernstein et al. investigated the psychometric properties of the MoCA in three diverse brain pathologies and concluded that it was reliable in detecting cognitive dysfunction as well as having the benefit of not fatiguing the patient (17). However, regardless of which cognitive screening tool has the greatest sensitivity, the original purpose of these tools was to detect global or generalized decline rather than domain-specific cognitive deficits. Indeed, the need for domain-specific cognitive tests in the brain tumor population was recently highlighted by a study of glioma patients (6). In that study, a range of specific visuospatial deficits were identified in right parieto-temporal gliomas that were not present in patients with prefrontal tumors. Thus, it remains uncertain whether cognitive screening tools are sensitive enough to identify mild and/or focal deficits in brain tumors (4,12).

Neuropsychological evaluations are held to be the "gold standard" for assessment of cognitive functions in focal neurological disorders like stroke (15,18). However, evaluations differ in test composition and can range from long and comprehensive, with a fixed test battery, to brief and flexible, with tests chosen to assess specific cognitive domains (19). One advantage of neuropsychological evaluation is the freedom to include tests that tap specific cognitive functions, depending on tumor location and presenting symptoms (4). On the other hand, the main criticism is the length of assessment, which can range from brief (1-2 h) to lengthy (8 h). Length is a particular issue in brain tumor patients as physical and mental fatigue has specifically been identified as a concern (12,20). In fact, Olsen and colleagues (12) found a selection bias in which patients were willing to complete a 4-h neuropsychological assessment. In particular, they identified that those who completed both the 4-h assessment and the cognitive screening tests tended to be younger with a higher level of education, obtained a higher MoCA score, and were on lower doses of medications. Thus, Olsen and colleagues, like Papagno et al. (4), concluded that a brief and well-tolerated CA is desirable, provided diagnostic accuracy can be maintained.
Neuropsychological evaluation has been compared to cognitive screening tools. As noted above, Olsen and colleagues (12) compared neuropsychological assessment to both the MMSE and the MoCA. The MoCA showed greater sensitivity to cognitive dysfunction than the MMSE; however, the main conclusion was that inclusion of a 4-h neuropsychological assessment was a significant deterrent to participation. The MMSE and MoCA have also been compared to neuropsychological assessment and return-to-work status in patients following aneurysmal SAH (15). In that study, 42% of patients were impaired on the MoCA, compared to none on the MMSE, and the MoCA correlated with domain-specific cognitive tests while the MMSE showed no association with specific tests. In addition, two MoCA items were associated with return to work. The MoCA was concluded to be more sensitive than the MMSE in SAH; however, it was not clear that the MoCA had sufficient sensitivity when compared to the neuropsychological assessment (15). Recently, a large retrospective study of acute stroke unequivocally demonstrated that the MoCA underestimated cognitive impairment compared to a brief 1-2 h neuropsychological assessment (18).

The current study compared the MoCA cognitive screening tool with a brief 1-1.5 h neuropsychological evaluation in primary brain tumors. The neuropsychological evaluation comprised a CA and mood and behavioral assessments, as this is thought important to fully characterize level of function and inform care plans (7). The aim was to ascertain whether the MoCA is sufficiently sensitive to detect cognitive impairment at an acute, post-resection time point or whether a brief but domain-specific CA is necessary.

PATIENTS

Thirty-six patients with primary brain tumors (low- or high-grade gliomas, meningiomas) were recruited by the Brain Tumor Nurse Practitioner (VB) from BrizBrain and Spine, The Wesley Hospital, Brisbane, QLD, Australia. Ethical approval for the study was granted by the UnitingCare and The University of Queensland Human Research Ethics Committees. Informed and written consent was obtained from all patients. Inclusion criteria were (1) confirmation of brain tumor ascertained by MRI and (2) surgical resection undergone prior to the investigation of cognitive functions. The cognitive screening tool was administered before the CA, which was completed in one testing session. The third (3) inclusion criterion was that the cognitive screening tool and CA were completed within the same week, to minimize effects due to the timing of cognitive screening or assessment. Thus, due to the latter, only 23 patients aged 18-69 years were included. The mean time between surgical resection and neuropsychological evaluation was 2.1 months (SD = 3.1; see Table 1 for patient characteristics). We note that 2.1 months is sufficient time to allow findings to be useful for neurorehabilitation (if available), planning for management of deficits for the patient and family/carers, and addressing any questions related to returning to community roles at home or work.

COGNITIVE SCREENING

The MoCA (11) was used as the screening tool. Although it was developed as a brief measure of global cognitive function, it contains items that measure these cognitive domains: visuospatial/executive function; naming; memory; language; abstraction; and attention.
Specifically, the MoCA is scored out of 30 points comprising these items: brief trail making, cube copy, and clock drawing (visuospatial/executive domain = 5 pts); animals to name (naming domain = 3 pts); five words to recall (memory domain = 5 pts); three brief attention tasks (attention domain = 6 pts); sentence repetition and word fluency (language domain = 3 pts); similarities (abstraction domain = 2 pts); and time/place questions (orientation domain = 6 pts). A normal score is 26 or above.

Cognitive assessment

A brief but tailored CA was administered that was completed in 1-1.5 h, depending on the individual patient's level of fatigue and ability. The CA was devised based on neuropsychological assessment principles and assessment of standard cognitive domains, detailed in Cipolotti and Warrington (21). The cognitive tests were specifically chosen based on Robinson's recent lesion studies of brain tumor and stroke patients with focal frontal and non-frontal lesions [e.g., Ref. (22,23)]. A similar approach was adopted by Papagno and colleagues (4) in their recent study of low-grade gliomas. Thus, estimated pre-morbid level of intelligence was ascertained for each patient.

Mood and behavior assessment

As part of the neuropsychological evaluation, levels of self-reported anxiety, depression, and apathy were assessed using the Hospital Anxiety and Depression Scale (HADS) (37) and the Apathy Evaluation Scale (AES) (38). A score on the HADS of 7 or below is in the normal range, with a score at or above 11 indicating significant levels of anxiety or depression. The AES results in scores between 18 and 72, with higher scores indicating increased apathy and a score of 41 suggested as the cut-off.

Analyses

The MoCA and domain-specific cognitive tests were administered and scored in the standard and published manner. Patients were classified as cognitively intact on the MoCA if they obtained a score of ≥26 or impaired if they scored <26 (11). For each individual cognitive test, patients were classified as cognitively impaired if they scored below the 5th percentile (i.e., 5% cut-off), with performance at or above this cut-off classified as intact [for similar methodology, see Ref. (18,39)]. For the Proverb Interpretation Test of verbal abstraction, an impaired performance was a score of <5/8 [for scoring details, see Ref. (24)].

Performance was analyzed in several ways. First, we calculated a broad measure of impairment for both the MoCA and the CA. For the MoCA, the number of patients impaired is reported. For the CA, we calculated the number of patients impaired on any test and also the number of cognitive domains each patient was impaired in (i.e., 0-6 cognitive domains). Second, based on the method adopted by Chan and colleagues for stroke patients (18), we conducted two specific analyses: (1) MoCA-Intact patients were investigated for impairment in each cognitive domain assessed by the CA; and (2) patients who scored the maximum points in each of the MoCA-specified cognitive domains, irrespective of the overall MoCA score, were analyzed in terms of the discrepancy between this and performance on the domain-relevant CA test. We also analyzed whether the MoCA-Impaired patients were impaired in at least one cognitive domain.

RESULTS

For the first broad measure, we found that 30.4% (7/23) of our patients were impaired on the MoCA as they scored <26.
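To make the domain weighting and cutoff concrete, the following is a minimal Python sketch (our illustration, not part of the original study; the example patient totals are hypothetical):

# MoCA domain point allocation described above; the seven domain maxima
# sum to the 30-point total, and the published cutoff classifies totals
# of 26 or above as intact (Nasreddine et al.).
MOCA_DOMAINS = {
    "visuospatial/executive": 5, "naming": 3, "memory": 5,
    "attention": 6, "language": 3, "abstraction": 2, "orientation": 6,
}
assert sum(MOCA_DOMAINS.values()) == 30

def classify(total_score):
    return "intact" if total_score >= 26 else "impaired"

for score in (28, 26, 23):  # hypothetical patient totals
    print(score, classify(score))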
A summary of the MoCA, demographic, and mood and behavior scores for the whole group, and for the MoCA-Intact and MoCA-Impaired sub-groups, is contained in Table 1. As expected, the MoCA score for the impaired group was significantly lower than for the intact group, t(21) = 6.31, p < 0.001. Apart from slightly higher self-reported symptoms of depression by the MoCA-Impaired group compared to the MoCA-Intact group, t(15) = 2.16, p < 0.05, the two groups were well matched for age, gender, education, pre-morbid intelligence, and chronicity (time since surgery; all p > 0.05). Similarly, there was no difference between these two groups in self-reported anxiety or apathy. With regard to symptoms of depression, we note that the means of both groups are in the "normal" range and not indicative of clinical or subclinical depression. If we examine individual scores, one patient in each group (MoCA-Intact and -Impaired) was in the abnormal range. For anxiety, abnormal scores were obtained by three patients in each group (MoCA-Intact and -Impaired). Finally, both groups reported mildly elevated levels of apathy, with a number of patients in both groups above the suggested cut-off (11 in the MoCA-Intact and 4 in the MoCA-Impaired group), which may reflect the acute post-resection stage of assessment.

For the CA broad measure, 69.6% (16/23) of the patients were impaired on at least one domain-specific cognitive test. The means and SDs for the whole group, and for the MoCA-Intact and MoCA-Impaired sub-groups, are reported in Table 2. Overall, there was no difference between sub-groups in performance on 9 of the 11 cognitive tests (i.e., p > 0.05), which supports specific patterns of cognitive deficits rather than a generally lower performance of the MoCA-Impaired patients. By contrast, the MoCA-Impaired group performed significantly more poorly on the Graded Naming Test of language, t(21) = 2.567, p < 0.05, and on the phonemic word fluency test that is sensitive to executive dysfunction, t(21) = 2.363, p < 0.05. The number of cognitive domains each patient was impaired in was as follows: 4/16 impaired in one domain; 4/16 impaired in two domains; 6/16 impaired in three domains; and 2/16 impaired in four domains. Thus, 75.0% of the impaired patients were impaired on tests in at least two cognitive domains.

For the specific measures based on Chan et al. (18), first we investigated the 16 MoCA-Intact patients for impairment in each cognitive domain assessed by the CA. Of these patients, 56.3% were impaired in at least one of the six cognitive domains. The percentage of MoCA-Intact patients impaired on domain-specific cognitive tests is shown in Figure 1. The main cognitive domains impaired for MoCA-Intact patients were abilities related to higher-level executive functions, including abstract reasoning, followed by attention and memory. By contrast, language was impaired in <10% of patients and no patient was impaired on the test of visual perception.

For the second specific measure, we examined patients who scored the maximum points in each of the MoCA-specified cognitive domains, irrespective of the overall MoCA score. Based on Chan et al. (18), we analyzed the discrepancy between this and performance on the domain-relevant CA test (Table 3). For the MoCA-Impaired patients, 100% were impaired in at least one cognitive domain on the CA. Thus, when a patient obtains an impaired score on the MoCA, this fully predicts significant impairment in at least one domain on the CA.
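The counts just reported fully determine the 2×2 table behind the predictive values discussed below (7 MoCA-impaired patients, all CA-impaired; 9 of the 16 MoCA-intact patients CA-impaired). Here is a minimal Python sketch reconstructing the diagnostic metrics from those reported counts (the variable names are ours):

# 2x2 table: MoCA classification vs. the CA criterion standard.
tp, fp = 7, 0  # MoCA-impaired: all 7 were CA-impaired, none CA-intact
fn, tn = 9, 7  # MoCA-intact: 9 were CA-impaired, 7 were CA-intact

sensitivity = tp / (tp + fn)  # 7/16 = 0.44
specificity = tn / (tn + fp)  # 7/7  = 1.00
ppv = tp / (tp + fp)          # 1.00: an impaired MoCA fully predicts CA impairment
npv = tn / (tn + fn)          # 7/16 = 0.44: a "normal" MoCA is nearly uninformative

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")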
By contrast, the implications for cognitive function are less certain when a "normal" MoCA score is obtained, as the MoCA showed very poor negative predictive value (0.44). Further, sensitivity for detecting cognitive impairment was extremely poor (0.44) in our primary brain tumor sample.

DISCUSSION

In our unselected primary brain tumor sample, only 30.4% were impaired on the MoCA cognitive screening tool. By contrast, for the CA, 69.6% of patients were impaired on at least one domain-specific cognitive test and, of these, 75% were impaired in at least two cognitive domains. If we examine the MoCA-Intact patients, more than half (56.3%) were impaired in at least one of the six cognitive domains. Specifically, 50% of the MoCA-Intact patients were impaired on tests of executive function, including abstraction, and a quarter of these patients were impaired in the domains of attention and memory.

The sensitivity of 0.44 for the MoCA in our patients was far lower than for other neurological disorders. For example, the sensitivity of the MoCA in an acute stroke population was 0.82 (18) and, notably, assessments were completed at comparable times post-stroke or post-tumor resection. However, we note that the MoCA has been found useful in patients with brain metastases (13) and it is reported to be adequate for the detection of mild cognitive impairment in neurodegenerative disorders such as Alzheimer's and Parkinson's disease [e.g., Ref. (40)]. Nevertheless, the sensitivity of 0.44 of the MoCA for our primary brain tumor population is extremely poor.

In light of this low detection rate of cognitive abnormalities, it is noteworthy that the mean MoCA score of 26.5 for our tumor patients is relatively high and indicative of, at most, mild global cognitive impairment. This was also the case for our mood and behavioral measures of anxiety, depression, and apathy. More specifically, the "MoCA-Intact" group obtained a score almost identical to the normal controls reported by Nasreddine et al. (11), while the mean score of 24 for the "MoCA-Impaired" group falls toward the top of the "mild cognitive impairment" range. The overall "mild" level of impairment on the MoCA in our sample differs from the lower MoCA mean score of 22 in patients with brain metastases (13). In fact, Olsen et al. suggested that the MoCA score may be helpful in this population, as patients with low MoCA scores may be less likely to benefit from palliative whole-brain radiotherapy while patients with high MoCA scores may tolerate more intensive interventions (13). Thus, for prognostic and treatment purposes in brain metastases, the MoCA may be useful. However, our results at a global level support the notion that primary brain tumor-associated cognitive deficits are indeed mild and/or focal and are hard to detect using global screening tools like the MoCA.

For the 69.6% of patients impaired in at least one cognitive domain on the CA, executive functions and abstract reasoning were by far the most commonly impaired domains. In fact, 87.5% of patients were impaired in these two domains and the remaining two patients presented with a selective nominal aphasia. This was followed by attention (43.8% impaired) and memory (37.5% impaired). These cognitive domains being the most often impaired is consistent with the findings of Tucha et al. (10) for frontal and temporal tumor patients.
Interestingly, of the two executive function tests, phonemic word fluency and the Hayling Test, 52.2% of all patients were impaired on just one test, the Hayling Test, which suggests that test choice is critical. With regard to memory, the MoCA does not assess visual memory, and 21.7% of our patients were impaired on our specific visual memory test. By contrast, the intact performance of all our patients on our test of visual perception does not reflect the finding of Shallice and colleagues (6) of visuospatial deficits in right posterior tumor patients. There are two possibilities for this apparent disparity. One, our specific test of visual perception is not sensitive to mild deficits. Two, our seven patients with right posterior tumors are remarkably intact. Upon examination of individual patients, one right temporal MoCA-Impaired and three right posterior MoCA-Intact patients lost points on the MoCA-specified visuospatial items. In addition, one of the MoCA-Intact patients presented with a highly selective apperceptive amusia in the context of an otherwise intact cognitive profile (41). This latter case, in addition to the two patients with a selective nominal aphasia, highlights the potential for any cognitive deficit to be specific and focal in brain tumor patients, thus necessitating freedom in test choice based on symptoms and/or tumor location.

Notably, patients who performed well on MoCA-specified domains were not always intact on the corresponding specific cognitive test, similar to Chan et al.'s findings in acute stroke patients. This is particularly so for the abstraction and executive/visuospatial MoCA-specified domains, which are assessed by one item each, both clearly insensitive to our patients' deficits. By contrast, the two MoCA-specified domains that most closely resembled the CA impairments were language and memory. In terms of language, only 30.4% of patients scored full marks on the MoCA-specified items, which comprised a sentence repetition item (>10 words in length) and a phonemic word fluency task. Of these two items, phonemic word fluency was one of the two standard cognitive tests on which MoCA-Impaired patients performed significantly more poorly than MoCA-Intact patients, and it can be classed as a test of executive function. If we examine naming ability, almost all patients obtained full marks on the MoCA naming items, although 17.4% of all patients were impaired on the standard Graded Naming Test. Very few patients obtained full marks on the MoCA-specified memory items although, as noted above, a main limitation of the MoCA is that visual memory is not assessed.

The inclusion of all types of brain tumors in our study could be argued to limit our findings. This is unlikely for two reasons. First, in our study, patients with both meningiomas and gliomas (high/low-grade) were in the MoCA-Impaired group (see Table 1). Second, in a recent study specifically investigating the effect of etiology on cognitive performance in patients with focal frontal lesions, once age and pre-morbid intelligence were accounted for, there were no significant differences between patients of different etiologies (stroke, meningioma, high/low-grade gliomas) (42). One caveat, however, is the practical implications of treatments for different brain tumor types.
For example, the timing of a brief CA in patients with higher-grade gliomas who proceed to receive initial radiation or chemotherapy at 2-6 weeks post-resection, followed by a gap with no treatment and then adjuvant chemotherapy (43), needs to be considered in specific contexts. If a neuropsychologist is attached to an acute neurosurgical ward, then assessment prior to treatment can be included in routine planned care. If this is unavailable, then an optimal time would be in the gap between treatments, approximately 8-10 weeks post-resection. Findings from a brief CA at either of these time points will be useful in further management, informing specific cognitive strategies/interventions, and helping the patient understand changes in thinking related to their tumor.

In summary, the MoCA has extremely poor sensitivity to cognitive impairment in our primary brain tumor sample, which means that if a "normal" MoCA score is obtained, a CA is necessary. Even if a patient is impaired on the MoCA, the severity may be underestimated and some areas of cognition are not assessed. In fact, only one MoCA-specified domain showed even remotely similar detection levels to a brief CA. A full discussion of other brief cognitive screening tools (e.g., ACE-III; CogMed) is beyond the scope of this preliminary study, although we can speculate that similar issues would be revealed. Thus, despite the limitation of our small sample size, we demonstrate that a brief and tailored CA lasting only 1-1.5 h is both necessary and feasible for the detection of cognitive impairments in primary brain tumor patients in the acute phase post-surgery. This is not only important for prognosis and monitoring, but crucial for neurorehabilitation and interventions (1,2,4). Moreover, mental deterioration, or fear of it, was rated as one of the highest concerns of patients and carers, contributing to quality of life (20). Our study suggests that the critical cognitive domains to assess are executive functions (initiation, suppression, abstraction), attention, memory (verbal and visual), and language (naming and verbal fluency). Finally, we highly recommend adopting the neuropsychological principle of tailoring an assessment based on lesion location and presenting symptoms.
Social Familiarity Governs Prey Patch-Exploitation, -Leaving and Inter-Patch Distribution of the Group-Living Predatory Mite Phytoseiulus persimilis

Background: In group-living animals, social interactions and their effects on other life activities such as foraging are commonly determined by discrimination among group members. Accordingly, many group-living species evolved sophisticated social recognition abilities such as the ability to recognize familiar individuals, i.e. individuals encountered before. Social familiarity may affect within-group interactions and between-group movements. In environments with patchily distributed prey, group-living predators must repeatedly decide whether to stay with the group in a given prey patch or to leave and search for new prey patches and groups.

Methodology/Principal Findings: Based on the assumption that in group-living animals social familiarity allows individuals to optimize their performance in other tasks, as for example predicted by limited attention theory, we assessed the influence of social familiarity on prey patch exploitation, patch-leaving, and inter-patch distribution of the group-living, plant-inhabiting predatory mite Phytoseiulus persimilis. P. persimilis is highly specialized on herbivorous spider mite prey such as the two-spotted spider mite Tetranychus urticae, which is patchily distributed on its host plants. We conducted two experiments with (1) groups of juvenile P. persimilis under limited food on interconnected detached leaflets, and (2) groups of adult P. persimilis females under limited food on whole plants. Familiar individuals of both juvenile and adult predator groups were more exploratory and dispersed earlier from a given spider mite patch, occupied more leaves and depleted prey more quickly than individuals of unfamiliar groups. Moreover, familiar juvenile predators had higher survival chances than unfamiliar juveniles.

Conclusions/Significance: We argue that patch-exploitation and -leaving, and inter-patch dispersion were more favorably coordinated in groups of familiar than unfamiliar predators, alleviating intraspecific competition and improving prey utilization and suppression.

Introduction

Developing explicit foraging strategies for optimal resource exploitation is a major challenge for every animal. Accordingly, numerous theories strive to predict optimal foraging [1,2]. For many predators, food is not homogeneously distributed but aggregated in patches, separated by corridors and areas without prey or host items [3,4]. Consequently, decisions whether to stay in or leave a patch with decreasing food, and how to distribute among available patches, are crucial for optimizing resource exploitation and fitness. The optimal strategy is determined by numerous factors such as the current and future quality of the present and surrounding patches, the travel time between patches, and the risk of dying while travelling [5,6]. According to Charnov's [6] marginal value theorem, a forager should leave a food patch when its foraging rate drops below the average intake rate per patch of the entire habitat. The theorem predicts the optimal strategy of a single animal without competitors, but commonly animals do not forage alone. Moreover, it assumes that the forager has perfect knowledge about the environment, which is never the case in reality [1,7]. Theories accounting for the presence and influence of other foragers on inter-patch movement and patch occupation are the ideal free and ideal despotic distributions [8,9].
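For intuition, Charnov's patch-leaving rule can be illustrated numerically. The following is a minimal Python sketch (our own illustration with hypothetical parameter values, not from the paper), assuming a diminishing-returns cumulative gain function g(t) = a(1 - exp(-bt)) and travel time tau between patches; the optimal residence time t* satisfies g'(t*) = g(t*) / (t* + tau):

import math

# Hypothetical parameters: patch gain asymptote a, depletion rate b,
# and inter-patch travel time tau.
a, b, tau = 10.0, 0.5, 2.0

def f(t):
    gain = a * (1 - math.exp(-b * t))   # cumulative gain g(t)
    rate = a * b * math.exp(-b * t)     # instantaneous gain rate g'(t)
    return rate - gain / (t + tau)      # zero at the optimal leaving time

lo, hi = 1e-6, 50.0                     # f(lo) > 0, f(hi) < 0
for _ in range(60):                     # simple bisection
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(f"optimal patch residence time t* = {mid:.2f}")  # about 2.15 here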
The ideal free distribution assumes similarity in the competitive strengths of foragers, whereas the ideal despotic distribution assumes dissimilarity. Inherently linked to patch exploitation and inter-patch distribution is leaving the natal or original patch to search for new patches to colonize, i.e. dispersal [10]. The ideas of ideal inter-patch distribution linked to optimal patch exploitation and dispersal are especially applicable to group-living animals.

Many animals live in groups, often but not exclusively as a result of the patchy distribution of their food [4,11]. Group-living may yield benefits such as enhanced anti-predator success [12] and/or more efficient foraging [13], but also entails costs such as higher detectability by natural enemies [14], higher risk of disease transmission [15] and increased within-group competition for resources [16]. Group composition and within-group assortment are commonly non-random and may be influenced by group member characteristics such as sex [17], size [18], age [19], social rank [20], genetic relatedness [21] or social familiarity [11]. Thus, one major challenge of group-living animals is to distinguish between group members and adjust patch exploitation, patch-leaving and inter-patch distribution according to group member characteristics. For example, if a patch is inhabited by genetically closely related individuals, dispersal may be a means to reduce inbreeding, avoid kin cannibalism [22] and relax kin competition for shared resources [10,23-25]. The theory of population viscosity states that limited dispersal, i.e. staying disproportionately long in the original patch, increases the relatedness of individuals within this patch. Benefits from altruistic behavior are dispersed to kin within the patch but are limited due to increased local competition [26-28]. In hierarchically organized groups, the social rank may determine the onset of dispersal. Weaker or lower-ranking individuals may gain by leaving a group early [29,30]. However, joining a new group may require the re-establishment of social hierarchy, entailing high costs [31].

Another prominent, yet in the context of dispersal and inter-patch distribution rarely addressed, feature of group-living animals is social familiarity, independent of genetic relatedness. Social familiarity is based on the ability to learn the phenotypic features of conspecific individuals and allows discrimination between familiar and unfamiliar individuals [32,33]. Many group-living animals preferentially associate with familiar individuals [11,34] because it may enhance their performance in foraging [34-36], predator vigilance and anti-predator behaviors (Strodl and Schausberger, unpublished data and [37]), or development [38] and reproduction (Strodl and Schausberger, unpublished data). Social familiarity may also reduce agonistic behaviors such as territoriality [39,40] and intraspecific competition [41]. However, the influence of social familiarity on dispersal, particularly the trade-off between staying in and leaving a patch, has rarely been experimentally addressed. The few available studies relate to exploratory behavior, which may be considered a specific form of dispersal if straying from the original site results in permanent leaving. Boldness in exploration has been shown to be influenced by familiarity with environmental features, including social familiarity, in domestic chicks, Gallus gallus domesticus [42,43], and guppies, Poecilia reticulata [44].
At the cognitive level, heightened boldness in exploration may be explained by attention shifts away from the otherwise attention-demanding inspection of unfamiliar neighbors [45,46].

Here, we assessed the effects of social familiarity on patch-exploitation, -leaving and inter-patch distribution of the group-living, plant-inhabiting predatory mite Phytoseiulus persimilis exploiting two-spotted spider mites, Tetranychus urticae. P. persimilis is highly specialized on spider mite prey producing dense webbing. Patchy dispersion and group-living of the predators are largely determined by the distribution of the prey [47], but also by mutual attraction [34,48]. For patch-leaving decisions, P. persimilis integrates information from the current patch, such as the density of prey and conspecifics [49], and from surrounding patches, such as volatiles indicating the presence of nearby prey [50]. Young gravid females are more dispersive than males and juvenile stages [47] and tend to leave the spider mite patches before local extinction, thereby leaving food for their offspring [51]. As a result of the aggregation of predator eggs within prey patches, after hatching, juvenile predators repeatedly encounter each other and socially familiarize, independently of genetic relatedness [52]. Familiar individuals are treated more favorably in agonistic interactions such as cannibalism [53-56]. Moreover, social familiarity may adaptively modulate within-group association, foraging, anti-predator and reproductive behaviors of group-living P. persimilis (Strodl and Schausberger, unpublished data and [34]).

We hypothesized that social familiarity leads to optimization of local prey patch exploitation, patch-leaving and inter-patch distribution of group-living P. persimilis. In the 1st experiment, we examined these behavioral characteristics in juvenile P. persimilis under limited prey on detached interconnected leaflets. In the 2nd experiment, we assessed the influence of social familiarity on patch-exploitation, patch-leaving and inter-patch distribution of adult P. persimilis females under limited prey conditions on whole plants.

Origin and Rearing of Experimental Animals

The individuals used for the experiments were offspring of females withdrawn from a laboratory-reared population of P. persimilis, which had been founded with individuals field-collected in Greece. The stock population was held on an artificial arena consisting of a plastic tile placed on a water-saturated foam cube (15×15×5 cm) in a plastic box (20×20×5 cm) half-filled with water. The edges of the tile were covered with moist tissue paper. The predatory mites were fed by adding bean leaves (Phaseolus vulgaris L.) infested with mixed life-stages of T. urticae onto the arenas at 2 to 3 d intervals. T. urticae was maintained on whole bean plants at room temperature and a 16:8 h L:D photoperiod. Predator rearing units and experimental arenas and cages were stored in environmental chambers at 25±1°C, 60±5% RH and a 16:8 h L:D photoperiod.

Generating Socially Familiar Predators

Leaf arenas used to obtain P. persimilis eggs for the experiments consisted of single leaflets of trifoliate bean leaves placed adaxial surface down on a water-saturated, filter paper-covered foam cube (13×13×4 cm) kept in a plastic box (20×20×5 cm) half-filled with water. Prey was provided by brushing mixed life-stages of T. urticae from infested bean leaves onto the arenas. To obtain predator eggs of similar age, 30 to 40 gravid
P. persimilis females were placed onto leaf arenas and allowed to oviposit for 6 h in experiment 1 and 24 h in experiment 2.

In experiment 1, familiarization took place in acrylic cages. Each cage consisted of a circular cavity (15 mm ø) drilled into a 3 mm thick acrylic plate [57]. A fine mesh screen closed the bottom opening of the cavity and a removable microscope slide was used to cover the upper opening. The slide and the acrylic plate were held together by a metal clamp. Six P. persimilis eggs, randomly withdrawn from the oviposition arenas, were transferred into each cage, together with six eggs of T. urticae to avoid cannibalism [58]. After ~48 h, the predator larvae hatched and after another 14 to 16 h they molted to protonymphs. The predators remained in the familiarization cages until all six individuals had reached the protonymphal stage, which is the first feeding life-stage.

In experiment 2, familiarization took place on leaf arenas constructed similarly to the oviposition arenas. Each arena had an accessible size of 6×6 cm, was infested with mixed life-stages of T. urticae and furnished with 20 to 25 P. persimilis eggs randomly withdrawn from the oviposition arenas. The predators remained on the familiarization arenas for 8 to 10 days, i.e. until they had reached adulthood and were mated.

Patch-exploitation and -leaving by Juvenile P. persimilis (Experiment 1)

Experiment 1 aimed at examining the influence of social familiarity on patch-exploitation and -leaving of juvenile predatory mites under limited food on detached leaflets. Each experimental unit consisted of four interconnected detached trifoliate bean leaflets, placed on a foam cube (15×15×5 cm) covered with moist filter paper kept in a plastic box (20×20×5 cm) half-filled with water. Leaflets were arranged in a Y-shape, with one leaflet in the center, at the junction of the Y, surrounded by the other three leaflets at the tips of the arms of the Y, and connected by wax bridges. Before the experiment, one to four adult spider mite females were placed on each arena for 24 h to feed and oviposit. Eggs were reduced to 20 on the central arena and to 6 on each of the three surrounding arenas (in total 38 eggs per experimental unit). The number of prey eggs provided should be sufficient for four juvenile predators to reach adulthood across the whole experimental unit of four leaflets, but insufficient on each single leaflet [51], and thus stimulate movement among leaflets. After the spider mite females had been removed, the wax bridges were constructed by dripping hot wax from a non-fragrant candle in between the leaflets [51,59]. To start the experiment, groups of four familiar or four unfamiliar protonymphs were released on the central leaf arena of each experimental unit. For each group, familiar protonymphs derived from the same familiarization cage, whereas unfamiliar protonymphs each derived from a different familiarization cage. Only protonymphs from cages where all six individuals were in the protonymphal stage were used for the experiment. After release of the protonymphs, the experimental units were observed repeatedly, beginning 3.5 and 5 h after release.

Patch-exploitation and -leaving by Gravid P. persimilis Females (Experiment 2)

Experiment 2 aimed at examining the influence of social familiarity on patch-exploitation and -leaving of gravid P. persimilis females under limited food on whole plants. Before the experiment, five P. vulgaris
plants were grown in 1-liter pots until the first trifoliate leaves were fully developed, which lasted ~2 to 3 weeks at 25±1°C, 60±5% RH and a 16:8 h L:D photoperiod. All plant parts above the first trifoliate leaves were cut off. Each pot of five plants represented an experimental unit. In order to create homogeneous prey patches for P. persimilis within each group of five plants, four gravid spider mite females were placed on the middle leaflet of the trifoliate leaf of each of the five plants to lay eggs, resulting in one prey patch per plant. To confine the spider mites to this leaflet, an adhesive (Raupenleim®, Avenarius Agro) was applied around the petiole of the leaflet. After three days, the spider mite females were counted and removed. To ensure similar numbers of spider mite eggs across pots, only those pots where at least two spider mite females were still present on each of the five middle leaflets were used for the experiment. After removal of the spider mite females, hot wax from a non-fragrant candle was dropped on the glue to allow free movement of the predatory mites on the petiole.

The experiment was started by releasing groups of either five familiar or five unfamiliar gravid P. persimilis females on the middle leaflet of the trifoliate leaf of one plant of each pot (i.e. five P. persimilis per group of five plants). Plant pots were randomly assigned to familiar and unfamiliar predator groups. For each group, familiar females derived from the same familiarization arena, whereas unfamiliar females each derived from a different familiarization arena. After releasing the predators on the plants, the middle leaflets harboring the spider mite patches were examined after 24, 48 and 72 h. At each observation, the number of adult P. persimilis females present in each spider mite patch was recorded. At the end of the experiment (after 72 h), the number and life-stages of predatory mite offspring and the number of spider mite eggs and mobile juveniles were counted on each leaflet. Familiar and unfamiliar groups were replicated 31 times each.

Statistical Analyses

All statistical analyses were performed using SPSS 15.0.1 for Windows (SPSS Inc., Chicago, IL, USA, 2006). In experiment 1, separate generalized linear models (GLM) were used to analyze the effects of familiarity on the number of predators reaching adulthood (out of initially four; binomial distribution with identity link function), the predator developmental time (averaged per experimental unit of four leaves; normal distribution with identity link function), the number of exuviae found on the outer three leaves (binomial distribution with identity link function), the number of dead individuals (binomial distribution with logit link function) and the number of females reaching adulthood (out of the number of individuals with sex determined; binomial distribution with logit link function). To assess the effects of familiarity on the number of predators dispersed from the central leaf (i.e. found on the outer three leaves), on moving predators (yes/no) and on the number of prey eggs left per unit of four leaves (out of initially 38) over time, generalized estimating equations (GEE; binomial distribution with identity link function, autocorrelation structure between observation points) were used.
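As an illustration of the kind of repeated-measures model described above, here is a minimal sketch in Python's statsmodels rather than SPSS (our illustration, not the authors' code; the data frame, column names and values are hypothetical, and the default logit link is used, whereas the paper specifies identity or logit links depending on the outcome):

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per experimental unit and
# observation time; 'dispersed' is a binary outcome, 'familiar' the
# treatment, and 'unit' the cluster identifier for repeated measures.
df = pd.DataFrame({
    "unit": sorted(list(range(1, 7)) * 3),
    "familiar": [1] * 9 + [0] * 9,
    "time": [1, 2, 3] * 6,
    "dispersed": [0, 1, 1, 1, 1, 1, 0, 1, 1,
                  0, 0, 1, 0, 0, 0, 0, 1, 1],
})

model = smf.gee(
    "dispersed ~ familiar + time",
    groups="unit",
    data=df,
    time=df["time"],
    family=sm.families.Binomial(),              # binary outcome
    cov_struct=sm.cov_struct.Autoregressive(),  # autocorrelation between observation points
)
result = model.fit()
print(result.summary())
print(result.qic())  # QIC, used in the paper to select the most parsimonious model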
To analyze the effects of familiarity on the predator dispersion index, defined as the number of leaves occupied by at least one mite per unit of four leaves, a GEE (binomial distribution with logit link function; autocorrelation structure between observation points) was used. In GEEs, we selected the most parsimonious model with the lowest QIC value [60].

In experiment 2, two-sided Student's t-tests were used to compare the number of adult T. urticae females on day 0, the number of T. urticae eggs and juveniles after 72 h, and the number of predator juveniles (eggs, larvae and protonymphs) after 72 h between plant pots harboring familiar and unfamiliar predators. Before the t-tests, the numbers of spider mites and predatory mites present on the five plants of a pot were lumped, resulting in one number per pot. Generalized estimating equations (GEE; binomial distribution with logit link function; autocorrelation structure between observation points) were used to assess the effects of familiarity on the number of predatory mite females present and on their dispersion index, i.e. the number of leaves out of maximally five occupied by the predators per pot, over time. To analyze the influence of familiarity on the dispersion of the adult female and juvenile predatory mites between the origin leaf, i.e. the leaf where the predators were initially released, and the external leaves of each pot over time, GEEs (normal distribution with identity link function; autocorrelation structure between observation points and between origin and external leaves, respectively) were used. To this end, the average number of predators per external leaf was calculated before analysis. In GEEs, we selected the most parsimonious model with the lowest QIC value [60].

Patch-exploitation and -leaving by Juvenile Predatory Mites (Experiment 1)

The number of individuals (mean ± SE per group) reaching adulthood was similar in familiar (3.70±0.16) and unfamiliar (3.52±0.19) predator groups (Wald-χ²(1) = 0.991, P = 0.320). However, the proportion of females among those reaching adulthood (mean ± SE per group) was higher in familiar (0.78±0.06) than unfamiliar (0.57±0.06) groups (Wald-χ²(1) = 4.739, P = 0.029). Familiar and unfamiliar predators needed similarly long to reach adulthood (h, mean ± SE, familiar: 52.19±0.53; unfamiliar: 50.67±1.01; Wald-χ²(1) = 0.272, P = 0.602). Mortality (mean number of dead individuals ± SE per group) was significantly lower in familiar (0.04±0.04) than unfamiliar (0.26±0.11) groups (Wald-χ²(1) = 4.620, P = 0.032).

The number of predators dispersed from the central leaf, i.e. those residing on the outer leaves, pooled across time was higher in familiar than unfamiliar groups, and dispersal progressed differently over time. Familiar predators started to disperse from the central leaf earlier than unfamiliar predators did (Table 1, Figure 1). As a consequence, the number of predator exuviae found per outer leaf (mean ± SE) was also higher in familiar (0.35±0.03) than unfamiliar (0.24±0.03) groups (Wald-χ²(1) = 5.039, P = 0.025). Familiarity did not affect the dispersion index (i.e. the number of leaves occupied by at least one mite per unit of four leaves) pooled across time, but the dispersion index of familiar and unfamiliar predators progressed differently over time (Table 1, Figure 2). The dispersion index trajectory of familiar predators reached a plateau already after 27.5 h, whereas that of unfamiliar predators increased more gradually.
Familiarity had no effect on general activity (moving or stationary) pooled across time, but the activity trajectories of familiar and unfamiliar predators progressed differently over time (Table 1, Figure 3). Familiar predators were more active than unfamiliar predators in the first half of the experiment, whereas the opposite was true in the second half. Familiar predators fed on more prey eggs pooled across time and reduced the number of prey eggs more quickly than unfamiliar predators did (Table 1, Figure 4).

Patch-exploitation and -leaving by Gravid Predatory Mite Females (Experiment 2)

The number of spider mites per pot of five plants (mean ± SE) did not differ between the pots assigned to the familiar (18.29±0.36) and unfamiliar (18.68±0.27) predator groups at the beginning of the experiment (Student's t-test, t(60) = 0.858, P = 0.394), indicating that initial prey availability was the same for familiar and unfamiliar predator groups. In contrast, at the end of the experiment (after 72 h), the number of juvenile spider mites (eggs, larvae, protonymphs) present per pot (mean ± SE) was significantly lower in familiar (48.37±3.30) than unfamiliar (61.69±3.33) predator groups (t(60) = 2.842, P = 0.006). The total number of predator offspring (eggs, larvae and protonymphs) produced per pot (mean ± SE) did not differ between familiar (8.97±2.44) and unfamiliar (8.65±2.41) predator groups (t(60) = −0.534, P = 0.595).

Familiarity had no influence on the number of predator females present per plant pot pooled across time, but the numerical presence of females progressed differently over time, with unfamiliar females leaving the plant groups somewhat earlier than familiar females did (Table 2, Figure 5). Pooled across time, familiar predator females did not occupy more leaves than unfamiliar predator females did. However, the number of leaves occupied by familiar predator females was higher after 24 h and thereafter decreased more steeply over time than the number of leaves occupied by unfamiliar predator females (Table 2, Figure 5). Familiarity had no main effect on the dispersion of the predator females between the origin leaf and the external leaves. In both familiar and unfamiliar groups, each external leaf harbored more predator females than the origin leaf. However, the interaction between familiarity and leaf position (origin vs. external) indicates that the dispersion of familiar females was more strongly biased towards external leaves than that of unfamiliar females (Table 2, Figure 6). Moreover, while in both groups the number of predator females on the origin leaf, where they were initially released, decreased over time and the number on the external leaves increased over time, familiar predators moved from the origin leaf to the external leaves earlier than unfamiliar predators did. Similar to the dispersion of the adult predator females, the dispersion of predator offspring was less strongly biased towards the origin leaf in familiar than unfamiliar groups, indicated by the interaction between familiarity and leaf position (Table 2, Figure 7).

Discussion

Social familiarity had a decisive impact on spider mite patch exploitation, dispersal and inter-patch distribution of juvenile and adult predatory mites, P. persimilis, under limited prey availability. The major trends were similar in the experiments with juvenile and gravid female predators, despite using different set-ups and spatial scales: prey patches on interconnected detached leaflets versus whole plants.
At similar age, familiar juvenile predators dispersed from a limited spider mite patch to adjacent patches earlier than unfamiliar juvenile predators did. Likewise, familiar gravid predator females dispersed from the release prey patch to external patches earlier than unfamiliar females did. Both juvenile and adult familiar predators depleted the spider mite prey more quickly than unfamiliar predators did. In experiment 1, the adaptive significance of social familiarity under the conditions tested, i.e. limited prey distributed among several patches, was apparent in the enhanced survival of juveniles and a higher proportion of females reaching adulthood.

Exploratory Behavior and Dispersal

In experiment 1, all unfamiliar individuals stayed in the release prey patch until molting to the deutonymphal stage, whereas most familiar individuals reached the deutonymphal stage in the patches on the outer leaves, indicating earlier dispersal by familiar juveniles. Experiment 2 revealed a similar tendency of early dispersal in adult predator females. Familiar females left the origin leaf, i.e. the leaf where they were released, earlier than unfamiliar females did. Consequently, the number of predator females decreased on the origin leaf and increased on the external leaves more strongly over time in groups of familiar than unfamiliar females.

In P. persimilis, two genetically determined types of dispersers have been proposed: those leaving before complete prey depletion ("milkers") and those staying until prey is depleted and possibly engaging in cannibalism ("killers") [61]. Linked to a possible genetic pre-determination, dispersal is extensively modulated by environmental factors [51]. In light of the milker-killer theory, in our experiments familiar females behaved more milker-like than unfamiliar females did, indicated by earlier leaving of the initial prey patch and searching for new prey patches instead of staying and trying to enhance their survival chances and/or advance their developmental stage by cannibalism. The differing patch exploitation and dispersal strategies of familiar and unfamiliar mites are also reflected in the inverse relation of dispersing (familiar more than unfamiliar) and dying (unfamiliar more than familiar) individuals. In experiment 2, familiar females left fewer offspring but more prey per offspring on the origin leaf, which should be advantageous for offspring development on this leaf [51,62].

An alternative or additional explanation for the earlier dispersal by familiar predators in both experiments may be increased boldness. Boldness, the willingness to accept a higher degree of risk in return for potentially higher foraging or reproductive gains, has been found to correlate with familiar environments [43]. Social familiarity increased boldness in guppies [44] and chicks [42], where socially familiar individuals were more exploratory in foraging. In line with these findings, we suggest that in our experiments familiar predators were more prone to explore the surroundings, and thus left the release sites earlier than unfamiliar predators did.

Inter-patch Dispersion, Prey Exploitation and Competition

In both experiments, social familiarity had a significant influence on spider mite exploitation and prey intake rates. Social familiarity increasing food intake rates has similarly been reported for other animals.
For example, red-backed salamanders, Plethodon cinereus, had a lower foraging rate in the presence of unfamiliar conspecifics due to spending more time avoiding or interacting with these more aggressive individuals [63]. Likewise, social familiarity led to decreased aggression and higher food intake rates in sea trout, Salmo trutta [35]. Interestingly, under ample prey supply, social familiarity had a different influence on P. persimilis: juvenile predators living in familiar groups needed less prey during development than those living in unfamiliar groups [34], and females had similar predation rates within familiar and unfamiliar groups (Strodl and Schausberger, unpublished data). It thus seems that the influence of social familiarity on prey intake changes with the possibility to choose between and move to other patches.

Proximately, the higher food intake rates of familiar predators are tightly linked to optimized dispersal from the origin patch and inter-patch distribution. Social familiarity led to a better coordination among group members in these inter-related behaviors. With patchily distributed prey, predators are constantly faced with decisions to stay in or leave a given patch [64]. Individuals of familiar groups, tending to leave the origin patch early with decreasing quality and to colonize other resource-rich patches, could thus be interpreted to be more ideally and freely distributed than individuals of unfamiliar groups [8,65].

Table 1. Results of generalized estimating equations (GEE; autocorrelation structure between observation points) for the effects of familiarity and time nested within familiarity on the number of dispersed juvenile predators (i.e. found on the outer three leaflets), their dispersion index (number of leaves occupied), activity (moving yes/no) and number of prey eggs left per experimental unit of four leaflets.

Group-living and Exploitation Competition

In addition to relaxing prey competition due to earlier dispersal and more favorable inter-patch distribution, social familiarity seems to have shifted the type of competition from contest to scramble [66,67]. In scramble competition, all competitors obtain about the same share of the resources, whereas in contest competition the resources are unequally partitioned among the competitors. For example, Utne-Palm and Hart [41] assessed the effects of familiarity on competitive interactions in sticklebacks. Familiarity decreased aggressive behaviors, leading to lower intraspecific competition and a more balanced food distribution among familiar individuals. We did not particularly assess competition, but this may apply in a similar way to our experiments. Unfamiliar predators were less dispersed among patches and exploited them more slowly. Being less dispersed likely intensified exploitation competition and interference in the origin patch and in this way retarded the depletion of the prey available on the whole experimental unit by unfamiliar P. persimilis. This conclusion is further supported by the higher mortality rates of unfamiliar groups. In contrast, familiar predatory mites dispersed earlier and resource (food and space) exploitation took place in a more balanced way. Thus, on the whole experimental unit, familiar juveniles behaved more scramble competitor-like than unfamiliar juveniles did.
Group-living, Dispersal and Reciprocity

Under the assumption that premature leaving of a relatively safe site and group initially bears more costs than benefits, such behavior may be selected for in group-living animals and be adaptively advantageous if the initial costs are later more than compensated for by other group members, i.e. reciprocated [68]. Premature dispersal could then be considered a form of cooperation independent of genetic relatedness. For example, sticklebacks preferred to join individuals that had been cooperative in the past [37]. Individual recognition is a prerequisite for reciprocity between unrelated individuals, and this prerequisite is met in P. persimilis. Individuals are able to recognize conspecific individuals which they previously encountered early in life, and treat those familiar individuals, independent of genetic relatedness, more favorably (Strodl and Schausberger, unpublished data and [56]). While the early leavers would pay the costs of premature leaving, the ones that stay would benefit from decreased within-patch competition. The costs can be repaid at a later date when the early leavers and those that stayed in the origin patch meet again in a different patch and the later arrivals take their turns at costly early leaving from this new patch.

Table 2. Results of generalized estimating equations (GEE; autocorrelation structure between observation points) for the effects of familiarity, leaf (origin or external; only for dispersion) and time nested within familiarity (only for females present, leaves occupied and dispersion by females) on the number of P. persimilis females present, the number of leaves occupied by P. persimilis females and dispersion by P. persimilis females and their offspring (after 72 h) between the origin and external leaves.

Implications for Biological Control

The knowledge gained in our study could be used to optimize the use of P. persimilis as a bio-control agent of spider mites, or be incorporated in mathematical models to better predict its performance in pest suppression. For example, a common procedure in cultures of the European catfish, Silurus glanis, is fish grading, i.e. assorting them into size or quality classes, which usually leads to interactions between individuals that have had no previous contact. Grouping familiar fish resulted in reduced conspecific aggression and enhanced energy usage, growth and survival [69]. Similarly, avoiding the release of predatory mites from different origins in one and the same crop, or trying to package mites with a high likelihood of familiarity in the same bins, could optimize the predator's performance in spider mite suppression.
2018-04-03T05:53:07.297Z
2012-08-10T00:00:00.000
{ "year": 2012, "sha1": "b8771827640a613a143e56570b71a683775055f0", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0042889&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b8771827640a613a143e56570b71a683775055f0", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
146995150
pes2o/s2orc
v3-fos-license
How Content Analysis may Complement and Extend the Insights of Discourse Analysis

Although discourse analysis is a well-established qualitative research methodology, little attention has been paid to how discourse analysis may be enhanced through careful supplementation with the quantification allowed in content analysis. In this article, we report on a research study that involved the use of both Foucauldian discourse analysis (FDA) and directed content analysis based on social constructionist theory and our qualitative research findings. The research focused on the discourses deployed, and the ways in which women were discursively positioned, in relation to abortion in 300 newspaper articles, published in 25 national and regional South African newspapers over 28 years, from 1978 to 2005. While the FDA was able to illuminate the constitutive network of power relations constructing women as subjects of a particular kind, questions emerged that were beyond the scope of the FDA. These questions concerned understanding the relative weightings of various discourses and tracing historical changes in the deployment of these discourses. In this article, we show how the decision to combine FDA and content analysis affected our sampling methodology. Using specific examples, we illustrate the contribution of the FDA to the study. Then, we indicate how subject positioning formed the link between the FDA and the content analysis. Drawing on the same examples, we demonstrate how the content analysis supplemented the FDA through tracking changes over time and providing empirical evidence of the extent to which subject positionings were deployed.

Discourse analysis is a well-established qualitative research methodology that is used in a range of disciplines. Although there is a diversity of approaches within discourse analysis (including linguistic, ethnomethodological, semiotic, Althusserian, Gramscian, social constructionist, psychoanalytic, and poststructuralist variations), the commonalities underpinning these various methods center on the significance of language in structuring and constraining meaning and their employment of interpretive, reflexive styles of analysis (Jørgensen & Phillips, 2002). What has received little attention in the methodological discussions concerning, and applications of, discourse analysis is how it may be enhanced through careful supplementation with the quantification allowed by content analysis. Jørgensen and Phillips (2002), in their review of discourse analysis, indicate that "it is possible to create one's own package by combining elements from different discourse analytical perspectives and, if appropriate, nondiscourse analytical perspectives" (p. 4). How this kind of multiperspectival approach may be achieved, in particular in mixing quantitative analysis with the qualitative elements of discourse analysis, has not received, to our knowledge, any systematic attention in recent qualitative or discourse analysis methodological books. For example, The Sage Handbook of Innovation in Social Research (Williams & Vogt, 2011), The Sage Handbook of Qualitative Research in Psychology (Willig & Stainton-Rogers, 2008), and Discourse Analysis: A Resource Book for Students (Jones, 2012) do not address such applications. In this article, we report on a research study that involved the use of both Foucauldian discourse analysis (FDA) and a directed content analysis based on social constructionist theory and our qualitative research findings.
We found that while the discourse analysis was able to answer the questions about the constitutive network of power relations constructing women as subjects of a particular kind, there were questions that emerged which were beyond the scope of a discourse analysis. These questions arose from our wanting, first, to understand the relative weightings of the deployment of particular subject positions in the data set and, second, to trace historical changes in the deployment of these subject positionings. The underlying logic for the mixing of the methods in our research study was, thus, to complement the qualitative analysis with quantitative analysis to yield a comprehensive understanding within and across the data set (as suggested by Creswell, Fetters, & Ivankova, 2004) and to allow the multifaceted and historically contingent character of the phenomenon under study to be revealed (Greene, 2008). Importantly, though, a feminist, poststructuralist theoretical grounding underpinned both analyses. Following on from Mertens (2007), our intention was to utilize mixed methods research approaches originating from a transformative paradigm so as to allow a deeper understanding of the role of power differentials in the construction of women as subjects of articles written about abortion. The research focused on the ways in which abortion was constructed, and women were discursively named and positioned in relation to abortion, in newspaper articles published in 8 weekly national and 17 daily regional South African newspapers over 28 years, from 1978 to 2005. The explication of the discursive power relations within these media representations achieved two aims. First, by examining newspaper articles published over nearly three decades, the sheer variety of discourses drawn upon to construct abortion, and the subject positions made available by those ways of speaking about abortion, were made apparent. Second, the fluid, multiple, nuanced, and contradictory identities constructed through discourse became visible. Some of these data, and the manner in which we combined discursive and content analyses, can be viewed in Macleod and Feltham-King (2012) and Feltham-King and Macleod (2015). In this article, we draw on examples from Feltham-King (2010) to illustrate the methodological points that we make. In order to orient readers, we provide, initially, some background in terms of the context within which our research was conducted. Then, we discuss sampling questions. The mixing of content analysis with discourse analysis meant that we had to refine the manner in which we sampled the data. This is followed by a discussion, using examples from our data, of the usefulness of the FDA in answering particular questions. Next, we show how the content analytic method that we used complemented the discourse analysis by providing insights into the variability of use of particular subject positionings over time and the relative weightings of use of these subject positions.

Context

The transition from the oppressive, racially based Apartheid system to democracy started in South Africa in 1990, with the unbanning of previously banned political parties, the release of political prisoners (most famously Nelson Mandela), and the beginning of negotiations that led to the first democratic elections in 1994.
A number of sociopolitical changes have been implemented since the demise of Apartheid, with the post-Apartheid government systematically setting about reversing the Apartheid-era racialized and gendered legislation and policies that ensured that all Black people, and in particular working-class and rural Black people and Black women, were discriminated against (Ngwena, 2004). With respect to our research study, changes in abortion legislation and in the newsprint media are the most pertinent social contexts within which our data were generated. South Africa's Abortion and Sterilisation Act (ASA; Act No. 2 of 1975) was introduced by the Apartheid regime as an extremely restrictive political tool that served to encourage unsafe abortion among the majority of women. The legislation was differentially applied. Black women, who comprised 87% of the population, had very limited access to state-funded medical and legal services and virtually no economic resources to access private health care. In practice, the ASA resulted in high mortality and morbidity rates for Black women. White women, who had the economic resources to fight the bureaucratic system, received the vast majority of legal abortions during the Apartheid years (Cope, 1993). The radical transformation of the abortion legislation happened as part of the broader democratization process initiated in 1990. The Choice on Termination of Pregnancy Act (CTOP; Act No. 92 of 1996), one of the most liberal globally, allows for women (including minors) to request (without parental consent) an abortion in the first 12 weeks of gestation. Thereafter, a medical practitioner must recommend abortion under specified (relatively open) conditions. The implementation of the CTOP Act has provided evidence that the introduction of liberal abortion laws drastically curtails the number of deaths owing to unsafe illegal abortion (Klugman & Varkey, 2001). Despite problems around the delivery of abortion services (Guttmacher, Kapadia, Naude, & de Pinho, 1998) and three court challenges by antiabortion groups, this liberal abortion legislation is still in place. Turning to the media: the press was strictly controlled under the Apartheid government, which used states of emergency, warnings to newspapers, and the Bureau of Information to restrict press freedom (Tomaselli & Louw, 1989). The transition to democracy saw not only the lifting of press restrictions but also the diversification of commercial publications. While ownership patterns have begun to change to reflect the demographics of South Africa, the initiation of a democratic media culture has not been without conflict (Botma, 2011; Mabote, 1998). Class, race, and gender inequalities continue to play themselves out in the media as the transformation process is ongoing (Berger, 2001).

Sampling

In order to ensure that we were able to perform the kind of integrative work that we envisaged, we had to begin the conceptualization of our research study with important sampling decisions. In these decisions, we needed to keep the requirements of both the discourse analysis and the content analysis in mind in order to ensure the integrity of the data analysis at a later stage. Our approach to sampling eschewed the all-or-nothing, probability/nonprobability sampling binary often used simplistically to distinguish qualitative from quantitative sampling decisions. We used a multistage purposeful stratified random sampling strategy (Collins, Onwuegbuzie, & Jiao, 2007).
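Before detailing the procedure, a minimal sketch may help to fix ideas: in Python, the grid-based sampling described in the next paragraph could be approximated as follows. The archive index, article counts, and column names here are hypothetical stand-ins, not the actual SA Media index.

# Hypothetical sketch: multistage stratified random sampling over a
# newspapers-by-epochs grid, drawing roughly 10% of articles per cell.
import random
import pandas as pd

random.seed(7)
# 'index' stands in for an electronic archive index: one row per article,
# with the newspaper it appeared in and its publication year (1978-2005).
index = pd.DataFrame({
    "article_id": range(3000),
    "newspaper": [f"paper_{i % 25}" for i in range(3000)],
    "year": [1978 + (i % 28) for i in range(3000)],
})
index["epoch"] = (index["year"] - 1978) // 4  # seven 4-year epochs, numbered 0-6

sample_ids = []
for (_, _), cell in index.groupby(["newspaper", "epoch"]):
    k = round(0.10 * len(cell))  # 10% per cell, rounded up or down
    sample_ids += random.sample(list(cell["article_id"]), k)

print(f"Sampled {len(sample_ids)} of {len(index)} indexed articles.")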
Thus, we combined purposive sampling, often employed in qualitative research, with random and stratified sampling, most often used in quantitative research. This sampling method, although relatively complicated, allowed us to fulfill the premises of sampling for qualitative research (sampling for richness of data) and those of quantitative research (sampling to ensure a reasonable level of representativeness). Our sampling frame was the South African media archives housed at the University of the Free State (http://www.samedia.uovs.ac.za), which is the biggest noncommercial press-cutting archive in South Africa, consisting of more than 3 million print media articles that have been electronically indexed since 1978. From this frame, we first used purposeful sampling by selecting 25 (17 daily regional and 8 weekly national) newspapers that fulfilled the criterion of having published 30 or more articles about abortion over the 28 years. The purpose in using this criterion was to ensure that we included newspapers that had engaged substantially with, and made a reasonable contribution to, the debate around abortion. These 25 publications were all on the list of 37 major urban commercial newspapers in South Africa as certified by the Audit Bureau of Circulations (www.abc.org.za). This sample was further refined through multistage stratified and random sampling. The 28-year period was stratified into seven 4-year epochs. Using the individual newspapers and the epochs as axes on a grid (25 × 7), we then randomly sampled 10% of articles in each of the 175 cells (rounded up or down within a range of 10 units). This resulted in a sample of 300 articles. This kind of multistage purposeful stratified random sampling enabled us to feel confident about the richness of our data and to perform the quantitative comparisons across time referred to in the next section.

The Usefulness of an FDA Approach in Relation to this Study

Social constructionist approaches to discourse analysis highlight the role of language in constructing reality and the manner in which discourses provide space for particular subject positions. Despite these commonalities, there is significant debate concerning the underpinning theoretical resources and analytical focus (e.g., the everyday discursive practices focused on by discursive psychology vs. the more overarching abstract mapping of discourses operating in society performed by Laclau and Mouffe; Jørgensen & Phillips, 2002). The feminist poststructural theoretical framework adopted in this research lent itself to an FDA that explicates how culturally located discourses, positioning strategies, and practices are intricately interwoven with, and serve to reproduce or to undermine, particular power relations (Arribas-Ayllon & Walkerdine, 2008; Parker, 1992, 2005; Willig, 2008). Drawing on Foucauldian understandings of discourse (Foucault, 1972), we conceptualized discourse as a system of statements and practices that are constitutive of the objects and subjects of which they speak. In conducting the discourse analysis of the newspaper text, we utilized the criteria for distinguishing discourses suggested by Parker (1992, 2005), vis-à-vis that discourses are realized in text, are about objects, contain subjects, are coherent systems of meaning, refer to other discourses, reflect on their own way of speaking, and are historically located.
One of our aims in relation to our research study was to identify the discourses deployed in relation to women who present for abortion. Several contradictory discourses were evident across the data set. In the following, by way of example, we highlight two: a discourse of autonomy and a discourse of victimhood, indicating how these discourses were put to work in the politics of abortion. We take both of these up later in discussing how the content analytic aspect of the project enriched the discursive analysis.

A discourse of autonomy, in which people are seen as either potentially capable of (potentially autonomous) or actually enacting (autonomous) self-determination and independent action, has underpinned many of the demands made by social movements in a range of spaces (Böhm, Dinerstein, & Spicer, 2010). This discourse was taken up in the data set in combination with a discourse of reproductive rights and to emphasize actions that lead to empowerment, as illustrated in the following extracts:

[1] Women want the right to take responsibility for their lives. They should be given the right to choose. (McGibbon, 1992, p. 17)

[2] Women's right to choose has to be seen in the context of the interim Constitution, the Bill of Rights, and the Government of National Unity's commitment to a nonsexist society and the empowerment of women. ("Probe team in favour of easier abortion laws," 1995, p. 3)

[3] It was virtually impossible to make a professional decision to the severity and enduring impact [of abortion] on a young girl or adult woman. Women in the context of counselling should be enabled to make the decision for themselves. (Peacock, 1994, p. 9)

[4] The Reproductive Rights Campaign has been launched by a group of concerned women to fight for women to gain control over their own bodies and over reproduction. ("Reproductive rights campaign," 1995, p. 30)

In the aforementioned extracts, we see how women are depicted as autonomous: They are able to take responsibility, to choose, to make decisions around abortions, to fight, and to gain control over their bodies and reproduction. This discourse is supported by, and interweaves with, a discourse of reproductive rights. The "right to choose," referred to in both Extracts 1 and 2, has underpinned much activism around the liberalization of abortion legislation and assumes a level of autonomy and capacity on the part of the woman.

In contrast to a discourse of autonomy, a discourse of "victimhood" positions people as disadvantaged, traumatized, and/or maltreated through a set of circumstances beyond their control (Hoijer, 2004). As indicated by Jeffrey and Candea (2006), a discourse of victimhood appeals to something "nonagentive" such as poverty and "poses itself as the neutral or indisputable starting point from which discussion, debates, and action . . . can and must proceed" (p. 289). A discourse of victimhood was deployed in our data set to highlight how women were victimized through abortion legislation and unfair circumstances surrounding abortion such as poverty, stigma, and lack of access to health care facilities. This way of talking about women often utilized words that were emotive and evocative, intended to evoke sympathy and compassion for the woman in that situation.

[5] Mothers of unwanted children-and the children themselves-suffer and women deserve the right to terminate a pregnancy safely. ("Call for abortion reform," 1978, p. 9)
[6] When a pregnant woman believes her only option is to have an abortion she will stop at nothing to do so, even if it kills her. They will risk their lives and die, often leaving young children behind. I feel compassionate towards these women but also angry that they are forced to mutilate themselves when if abortion was legal they could undergo a simple procedure with no fear. (Krost, 1995, p. 9)

[7] Young women were marginalized by age, gender and race. Their lack of status in the family, in schools and training institutions and in society at large ensures that they bear the brunt of social ills that face our communities. (Seale, 1995, p. 5)

[8] Women in rural areas will be worst hit by the lack of resources, with many having to travel 250 km or more to the closest facility offering terminations. (Rickard, 1997, p. 5)

[9] She [Dr Naude] said a large body of evidence showed that abortion was detrimental to the health of mothers. ("When nurses have to do their jobs," 2002, p. 11)

[10] Women, who have abortions, quickly learn that it is not as safe and easy as proabortionists would have us believe. Instead, abortion is dangerous to both the physical and mental health of women even if done under clinical conditions. Finally, instead of being a huge step forward for women's rights, abortion on demand is the most destructive manifestation of discrimination against women. Proabortionists are trying to sell abortion to women under false premises and are taking advantage of the ignorance and vulnerability of women. ("Women will suffer," 1996, p. 26)

[11] Women who have had abortions up to 30 years ago and have married and had children in the interim, start thinking that they are mad until they realise that there is a name for what they have been struggling with for years. Kotze said that some of the women have no respect for their bodies after an abortion and will sleep around, get pregnant and simply undergo more abortions. (Fourie, 2004, p. 11)

[12] We should remain aware of social realities and have compassion for the many women who are victims of tragic circumstances. (Auerbach, 1996, p. 11)

Women are described in these extracts in ways that emphasize victimhood: they suffer, have no options, unwillingly engage in self-mutilation, lack status and resources, are marginalized, bear the brunt of social ills, suffer physical and mental health consequences, have their ignorance and vulnerability taken advantage of, or go mad. Extremist language and emotive words are used throughout. These women are described as being so desperate that they are willing to die, so tarnished or ruined that they will never recover, and so without hope that they cannot be redeemed. By being positioned as victims, the women's lack of agency and of the ability to overcome the many negative consequences of abortion, ranging from guilt to death, is emphasized. What is clear from this range of extracts is that a discourse of victimhood can be deployed in very different ways. In Extracts 5 and 6, women are seen as victims of unfair legislation that denies them the autonomy referred to earlier. In Extracts 7 and 8, we see how gradations of victimhood are constructed: young women and women who live in rural areas are singled out for victim status (the latter in the context of liberal abortion legislation, but where services are less than optimal).
While the discourse of victimhood was deployed in much of the public rhetoric that supported the liberalization of the legislation prior to the CTOP Act, and thereafter to highlight the poor roll-out of services (as in Extracts 5, 6, and 8), we see in Extracts 9, 10, and 11 how a discourse of victimhood can equally well be deployed to underpin antiabortion arguments: women who undergo abortions will be the victims of poor mental and physical health; they are victims of proabortionists who provide them with services but misuse their ignorant and vulnerable state. Parker's (1992) additional criteria for identifying discourses (that discourses support institutions, reproduce power relations, and have ideological effects) speak to the deconstructive aspect of the FDA that we used. This aspect of the analysis involved, inter alia, analyzing the assumptions on which the deployment of a particular discourse draws, what gains are made in such deployment, and the implications thereof (Macleod, 2002). Thus, for example, a discourse of autonomy draws on the fundamental liberal-political notion of individual rights and an understanding that women are competent and able to make independent decisions concerning their reproductive health. It is this discourse, complemented by the dual notions of "choice" and "rights," that has been the cornerstone of mainstream Western feminist advocacy around access to abortion (Ferree, 2003). This discourse, together with an appeal to reproductive public health issues, formed the main arguments for the liberalization of abortion legislation during the democratization process in South Africa (Klugman & Varkey, 2001). Although a discourse of autonomy is firmly entrenched in, at least liberal, feminist advocacy for abortion, it is not without its difficulties. The assumption of active agency on the part of women seeking abortions belies the power relations within which choices are made. This might lead to a lack of examination of "the social context and conditions needed in order for someone to have and exercise rights" (Fried, 2006, p. 240) and a failure to address the power relations within which responsibility for pregnancy and children is assigned. The discourse of victimhood, on the other hand, foregrounds social context, highlighting how women suffer as a result of circumstances. It is complemented by, and draws on, a discourse of protectionism, in which there is a social contract to identify, and to help, those most in need of help. Thus, a discourse of victimhood is, as indicated earlier, inevitably paired with a call for some appropriate response. As seen in the extracts presented earlier, victimhood requires a basic level of compassion (Extracts 6 and 12). But it is also suggestive of fundamental action, like liberalizing abortion law (Extracts 5 and 6), improving services (Extract 8), and restricting abortion (Extracts 10 and 11). These calls to action are buttressed by the shock and horror that we ought to experience at women's victim status. As the inverse of the discourse of autonomy, the discourse of victimhood likewise has a mixed reception among feminists. On the one hand, the manner in which victimhood deprives women of agency and turns them into grateful recipients of benevolent (often patriarchal) assistance has been problematized (McKenzie-Mohr & Lafrance, 2011). On the other hand, a denial of victimhood is seen as a failure to acknowledge gender inequities and the experience of disadvantage and discrimination (Baker, 2010).
Feminists have grappled for some time now with the bifurcation that autonomy and victimhood present, attempting to find ways to nuance and trouble these homogenizing ways of viewing women (Schneider, 1993).

Supplementing Discourse Analysis With Content Analysis: Positioning as the Link

The discourses that we analyzed in the data set were characterized by variability, contradiction, and tension. The flexibility with which they could be used in different contexts and at different times is the very feature that interested us in our analysis. This gave rise to the questions concerning the extent to which various discourses were deployed across the data set and changes in usage over time. It was to answer these questions that we supplemented discourse analysis with content analysis. We did this by homing in on a specific criterion in Parker's (1992, 2005) list of criteria for identifying discourses: that discourses contain subjects. As Parker (1992) points out, a discourse allows space for a certain type of self: "it addresses us in a particular way" (p. 9). This has been referred to as subject positioning, which "constitutes ways-of-being through placing a subject within a network of meanings and social relations which facilitate as well as constrain what can be thought, said and done by someone so positioned" (Willig, 2000, p. 557). Davies and Harré (1990) use the terms interactive and reflexive positioning to indicate the processes whereby subjects are positioned by others and by themselves, respectively. Each description of women presenting (or potentially presenting) for abortion constructs a particular subject position for these women, simultaneously allowing and constraining particular ways-of-being within a particular system of power/knowledge (Foucault, 1980). As such, women were interactively positioned in the newspaper articles within various discourses relating to abortion. We used this aspect of discourse analysis, vis-à-vis subject positioning, as the bridge to conducting the content analysis. Subject positioning has been proposed as providing a bridge between discourse analysis and conversation analysis (Wetherell, 1998), but we argue that it serves as well in integrating FDA and social constructionist content analysis.

Usefulness of Content Analysis

The basic assumptions of the content analysis that we used in our research are very different from those underpinning a traditional content analysis. Rather than using quantification in an attempt to show up similarities among predetermined categories (conceptualized as fixed, stable, and objectively verifiable), the quantification in this directed content analysis was utilized to track the multiplicity, variety, instability, and historical contingency of the discursive constructions over almost three decades. Following Hsieh and Shannon (2005), who indicate that directed content analysis proceeds with relevant research findings as guidance for initial codes, we used our FDA as the basis for the conceptualization of the codes. By utilizing a content analysis to quantify the ways in which discursive constructions positioned women, the focus shifted from understanding these positions qualitatively to revealing the shifts, changes, and pervasiveness of particular positions. This mapping could only commence once the qualitative coding was complete.
Thus, for example, having identified a discourse of autonomy and a discourse of victimhood, we started the process of coding each article for the presence or absence of the victim subject position and the autonomous or potentially autonomous subject positions. A crucial requirement for content analysis is that the categories are sufficiently precise, and mutually exclusive, to enable different coders to arrive at the same results when the same body of material is examined. Thus, we created clear descriptions of what these subject positions entailed and included brief examples from the extracts:

In the victim position, girls and women are positioned as in need of protection from circumstances, predators, and abusers. Those talking about women in this way are thus implying that women do not have the power or resources to protect themselves. They are described as vulnerable and at risk, or as individuals who have suffered unjustly owing to circumstances beyond their control. Often a sensationalist style of writing is used, with a liberal use of emotive words to evoke sympathy in readers. Examples of such words are: "desperate", "degrading", "humiliating" or "painful". Positioning women in this way suggests a need for sympathy, the need to empower women, or the need for powerful protectors. This can be seen in extract [5]:

[5] Mothers of unwanted children-and the children themselves-suffer and women deserve the right to terminate a pregnancy safely. ("Call for abortion reform," 1978, p. 9)

The autonomous position constructs women as full citizens of South Africa. Such a woman is not under the authority of any person or institution and is capable of mature decision-making about her reproductive choices. This positioning refers to talk which is based on an ideal and not on actual lived experience. As such, the autonomous woman experiences no class, educational or gender disparities, is protected by law against abuse, and is considered economically productive and valuable. Such a woman has a right to safe reproductive healthcare. Such a positioning brings the woman as a new rights-bearing South African citizen into being, as shown in extract [2].

[2] Women's right to choose has to be seen in the context of the interim Constitution, the Bill of Rights and the Government of National Unity's commitment to a non-sexist society and the empowerment of women. ("Probe team in favour of easier abortion laws," 1995, p. 3)

The potentially autonomous position is similar to the autonomous position. The difference is that those utilising the potentially autonomous position do so in an attempt to resist current circumstances that render women potentially powerless. They question the assumption that women should accept their powerlessness. The intention of positioning women in this way is to propose an imagined ideal in which women may experience autonomy. For this reason this way of talking about women utilises the future tense, as shown in the duplicated extract [1].

[1] Women want the right to take responsibility for their lives. They should be given the right to choose. (McGibbon, 1992, p. 17)

Because the frequency of the occurrence of categories across the data set is calculated in content analysis, the question of reliability is raised. This is usually assessed through intercoder reliability.
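By way of illustration, percent agreement and Scott's pi for two coders making binary presence/absence judgments can be computed as in the following sketch; the codes are invented, and this is offered as a worked example of the statistics discussed below, not as the procedure used in the study.

# Hypothetical sketch: percent agreement and Scott's pi for two coders
# making binary (presence/absence) judgments on the same set of articles.
from collections import Counter

coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # e.g., victim position present?
coder_b = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n  # percent agreement

# Scott's pi computes expected agreement from the pooled category proportions.
pooled = Counter(coder_a) + Counter(coder_b)
expected = sum((count / (2 * n)) ** 2 for count in pooled.values())

scotts_pi = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.2%}; Scott's pi: {scotts_pi:.2f}")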
Neuendorf (2011) points out that, ideally, two subsamples should be selected for intercoder reliability in any given content analysis when human coding is employed, namely, a pilot reliability test (conducted prior to commencement of coding the sample) and additional independent coding of a subsample of the data (conducted once the coding process is complete). In the case of our project, two reliability subsamples were used: first, the pilot study (consisting of 10% of the sample), during which the coding scheme was refined and a revised codebook constructed, and, second, the independent coding of the whole sample for the absence or presence of the three positions. Lombard, Snyder-Duch, and Bracken (2002) recommend that an acceptable level of intercoder percent agreement be selected upfront. In our case, we settled on 90%. In the case of the data presented in the following sections, the percent agreement was 94%. In addition, we calculated Scott's pi, which is suitable for calculating intercoder reliability coefficients between two coders (Lombard, Snyder-Duch, & Bracken, 2002). The Scott's pi coefficient was calculated at 0.88, which is recognized as acceptable in most contexts. As indicated earlier, one of the limitations of discourse analysis is its inability to ascertain the extent to which a particular discourse is being deployed, beyond the general sense obtained by the researcher in reading the data. Following the discursive analysis, we coded each newspaper article for the presence or absence of the subject positions suggested by the discourses identified. Thus, in line with the examples presented earlier, we found that women were positioned as potentially autonomous or autonomous in 12.6% and 17.3%, respectively, of all articles in the data set. In contrast, the victim position was deployed in 41.6% of the articles. Additionally, we were interested in changes over time. Some of the analyses traced changes across the seven 4-year epochs referred to earlier (see, e.g., Feltham-King, 2010), whereas others took a broader sweep, analyzing the deployment of subject positions in two significant historical periods, vis-à-vis the Apartheid period prior to 1990 and the transition to, and actualization of, democracy post-1990. Table 1 demonstrates the prior-/post-1990 analysis for the potentially autonomous, autonomous, and victim subject positions. This table illustrates how consistency or change in the deployment of a subject position may be tracked. We note the relative consistency in the use of the potentially autonomous subject position and the stark change in the deployment of the autonomous subject position over the two historically significant periods. The victim position is deployed in more than one third of the articles prior to 1990, with an increase noted post-1990. These illustrations demonstrate how our supplementation of FDA with social constructionist content analysis allowed us to make analytical points that otherwise might have been missed. We were able to show how the victim subject position outweighs the potentially autonomous and autonomous subject positions. Given the different implications of the discourses of autonomy and victimhood highlighted earlier, this relative weighting of the positioning of women in the data set with respect to these opposing discourses allows us to understand the politics, and pitfalls, of public representations of abortion in South Africa.
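A tabulation of this kind can be produced from the coded articles along the following lines; the DataFrame, its column names, and the randomly generated codes are hypothetical stand-ins for the actual coded data set of 300 articles.

# Hypothetical sketch: prevalence of coded subject positions and a
# prior-/post-1990 comparison, assuming one row per coded article.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
articles = pd.DataFrame({
    "year": rng.integers(1978, 2006, size=300),
    "victim": rng.integers(0, 2, size=300),      # presence/absence codes
    "autonomous": rng.integers(0, 2, size=300),
})
articles["period"] = np.where(articles["year"] < 1990, "pre-1990", "post-1990")

# Overall prevalence of each subject position across the data set.
print(articles[["victim", "autonomous"]].mean())

# Deployment of the victim position before and after 1990.
table = pd.crosstab(articles["period"], articles["victim"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")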
The absence of the autonomous position during the conservative, pronatalist, and nationalist politics of the Apartheid regime speaks to the historical locatedness of discursive resources. While the ASA (1975) was enforced, autonomy for women could only be expressed as a possibility (as seen in the 13.8% of articles published in that period that used the potentially autonomous position). Even after the democratization process was underway, and the issues of racial and gender transformation and equity were foregrounded, the fact that the victim position is used almost twice as often as the autonomous position has implications for the politics of abortion. Although, as indicated earlier, neither of these positionings is unproblematic from a poststructuralist feminist perspective, the emphasis on victimhood in both conservative arguments to restrict abortion legislation and progressive arguments for better roll-out of services is in need of careful inspection, debate, and nuancing.

Conclusion

As indicated at the beginning of this article, a feminist poststructuralist approach underpinned all our analytical work in this research study. The combination of FDA and content analysis allowed us to address different issues in the overall analysis of the data and thus deepened our analysis of the material, enabling critical engagement not only with the discourses being deployed but also with their variability and contingency over time. The theoretically driven but pragmatic approach that we took in our research speaks to the compatibility thesis, which argues against the notion that the paradigms of qualitative and quantitative research are inherently incompatible. Compatibility, argue Karasz and Singelis (2009), should be judged by whether the utilization of a mixed method is appropriate for the research question being posed. Furthermore, the underlying theoretical framework should be consistent. Had we conducted only discourse analysis, our research questions would have been limited to the discourses and subject positions appearing in the data over the 28-year period. Supplementing this analysis with a content analysis of the extent to which the subject positions were deployed in the data set, and how these changed over time, added depth to the analysis and allowed us to speak to the variability of discursive constructions over a period of time and within specific sociohistorical contexts. Our method must be distinguished from Foucault's genealogical method (Foucault, 1977). As a genealogy is a history of the present, tracing particular discourses and practices back to the conditions of possibility of their emergence, it envisages a less linear historical approach than our mixed method. We did not identify particular issues in the present on which we wished to perform a genealogical study. Instead, we left the field open, tracing in a linear fashion the range of discourses emerging across the data set and the changes in discourses over time.

Note to Table 1: 33% of the cells had an expected count of less than five; therefore, the chi-square statistic to establish exact significance was not meaningful.
2019-05-08T13:32:06.622Z
2016-02-29T00:00:00.000
{ "year": 2016, "sha1": "83c315d90dae5921f66af505a5950bb27d6dd521", "oa_license": "CCBYNC", "oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1609406915624575", "oa_status": "GOLD", "pdf_src": "Sage", "pdf_hash": "fb946a3545796b0fa3b9326f0c6f26984614b858", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
212873837
pes2o/s2orc
v3-fos-license
Diagnostic possibilities of a method of content analysis for therapy of women exhibiting codependency

The article presents the results of a comprehensive study of the psychological characteristics of co-dependent behavior in women. The novelty of the study lies in the development of psychodiagnostic criteria for identifying co-dependent behavior using the content analysis method, applied to Thematic Apperception Test (TAT) tables, with the aim of subsequently developing a set of therapeutic measures. The study involved 152 women from the Rostov region of the Russian Federation, aged 17 to 30 years, exhibiting varying degrees of co-dependence and living with a drug-dependent partner for at least 5 years. The study was carried out using standardized psychological tests and projective methods, which made it possible to identify the degree of manifestation of co-dependent behavior: the test "The Scale of Co-dependence in Relationships" (Spann-Fisher, adapted by Moskalenko) and the "Write a Story" technique (stimulus material of tables 2, 4 and 6 of the TAT). It was established that content analysis allows us to determine qualitative differences in the degree of manifestation of co-dependent behavior, which can be identified and used at the initial stage of therapy with a co-dependent client; the method thus allows not only rapid identification of the condition but also determination of its intensity.

Introduction

Co-dependent behavior is a learned form of behavior expressed in self-suppressing patterns, which subsequently leads to a decreased ability to initiate and participate in various social relationships [1]. At present, co-dependent behavior appears to be one of the most common reasons for seeking psychotherapy, and it has become important for specialists to identify reliable diagnostic techniques for studying the features of this phenomenon [2]. In this regard, an attempt was made to identify the prerequisites of co-dependent behavior using projective techniques, which formed the basis of a comprehensive study of respondents from different regions of the South of the Russian Federation [3; 4]. Concerning the origin of co-dependence, it is customary to distinguish three factors: biological (each person's characteristic reactions to various stimuli and influences) [5]; psychological (including personality traits and the presence of psychological trauma at different age periods) [6]; and interactions with other people (both intrafamilial and social) [7]. The psychological literature describes co-dependent personalities as people who grew up in a single-parent family or in a family where one parent was addicted to psychoactive substances (alcohol and/or drugs); who have various relationships with addicts; or who are survivors of childhood violence (physical, sexual, emotional) [8]. In modern literature, it is customary to consider the phenomenon of "co-dependence" as a result of adaptation to past experience. When choosing a partner, the co-dependent person, sometimes unconsciously, makes a choice in favor of an addicted partner, because they see in that partner patterns of behavior to which they once learned to "successfully" adapt. At the same time, in the rehabilitation of alcohol or drug addicts, there are often cases of relapse provoked by the co-dependent partner, even though they themselves are trying to "cure" the addicted patient [9; 10].
Co-dependent behavior is characterized by various psychological features: hypercontrol of oneself and of the Significant Other, not only in behavior but also in feelings and thoughts [11; 12]; strictness towards oneself and others; inadequate self-esteem and an uncontrolled desire to please others; fear of criticism and harsh reactions to it; uncontrolled fantasies and the need for deception, even without sufficient reason; the need to feel "necessary and important"; and a never-ending feeling of guilt and shame for oneself and for others [13]. Psychological defense mechanisms are prevalent in co-dependent patients: denial, rationalization, alexithymia, denial in the form of a lack of awareness of one's reactive actions, as well as an illusion of irreplaceability and a lack of motivation for self-development [14].

Research methods

1. Theoretical analysis of relevant scientific literature.

2. Psychodiagnostic methods: the "Scale of Co-dependence in Relationships" (Spann-Fisher, adapted by Moskalenko) and the projective "Write a Story" technique (tables 2, 4 and 6 of the TAT).

3. Statistical data processing methods, carried out using the STATISTICA 10.0 software package.

To identify personality fixations and self-esteem, we used the "Write a Story" technique (tables 2, 4 and 6 of the TAT), which allows us to determine personal motives and orientations. For us it is extremely important that this technique allows respondents to self-identify with the main character; the character becomes a component of their own "Self" in the specific situation of the story. The interpretation of the test is based on the understanding that the image that appears in our consciousness in the first fractions of a second after the presentation of the picture triggers an avalanche of associations. An attempt to structure these associations into a framed sentence helps to express what is most important and "painful" for the author. Thus, content analysis reveals a symptom of personality orientation. Analyzing the plots of all 4 stimulus materials, we get the opportunity to speak about the syndrome of the co-dependent personality.

Sample description

The empirical study was conducted among co-dependent women aged 17 to 30 years who live with drug addicts (showing co-dependent behavior) and who sought psychological help in overcoming difficulties in family relationships. Their addicted partners have a history of substance abuse of 3 months to 6 years (2.3 years on average) and have been treated in rehabilitation centers in the Rostov Region. In total, the empirical study involved 152 co-dependent women. Participation of all respondents in the study was voluntary.

Research results

According to the data obtained, a normative level of co-dependence was diagnosed in 42 women among the total number of participants in our study, accounting for 27.3%. 66 women, representing 44.7% of the total sample, were diagnosed with a moderate level of co-dependence, and 44 women, representing 28% of the total number of respondents, showed a high level of co-dependence (Figure 1). Based on these data, we can state that the following symptoms are observed, at varying degrees, in women with co-dependent behavior: compulsive actions, guilt, complete surrender to the Significant Other, delusion, restrained anger, denial, self-deception, constraint of feelings, low self-esteem, uncontrollable aggression, anger towards oneself, ignoring of personal needs, as well as various communication problems.
Within the content analysis of the stories ("Write a Story" technique), we identified assessment criteria and were able to present them in the form of leading determinants of personality orientation. Figure 2 graphically shows the results of the frequency analysis, which illustrates the quantitative changes in the data for several groups of patterns (subgroup 1: normative level of co-dependence; subgroup 2: moderate level of co-dependence; subgroup 3: high level of co-dependence):

- The frequency of occurrence of the main determinants "Focus on organization, creation", "Lack of object relationships", "Unrealistic expectations", "Merging, lack of boundaries", "Need for approval", "Overcoming difficulties" and "Hope" in all subgroups is directly proportional to the increase in the level of co-dependence;
- In subgroup 2 (moderate level of co-dependence) the determinant "Focus on the object" is much more common than in subgroups 1 and 3;
- The frequency of occurrence of the determinant "Dissatisfaction with others" in subgroup 2 is similar to that in subgroup 1 and differs significantly between subgroups 2 and 3;
- The determinant "Overcoming difficulties", which indicates the willingness of respondents from subgroups 3 and 2 to see difficulties in all situations and to overcome them, turned out to be the most pronounced indicator of co-dependence; this cognitive bias can lead to a tendency to invent difficulties where they do not exist.

To determine the level of statistical significance of the obtained results, the Kruskal-Wallis test was used (in the statistical processing system R). It was found that the experimental subgroups differ significantly on these indicators. To determine the specific pattern of differences across the three studied subgroups, post hoc multiple comparisons were performed (Conover test). Thus, three groups of statistically significant indicators were identified:

- A uniform change in the criterion across all three subgroups is observed for "Need for approval" (p < 0.001), "Perspective in time" (p < 0.001) and "Overcoming difficulties" (p < 0.001). These reflect the motives of co-dependent activity: the need for approval from the people around them, the understanding that it is necessary to wait out certain negative events because positive changes will soon come, and a tendency to see and overcome difficulties in any situation;
- The variance of the criterion is significant between subgroups 1 and 3, as well as between 2 and 1, but not between subgroups 2 and 3, for the indicators "Dissatisfaction with others" (p < 0.001), "Merging, lack of boundaries" (p < 0.001) and "Hope" (p < 0.001). These indicators may thus signal the presence of co-dependence but do not determine the degree of its severity;
- The indicators of the determinants "Orientation to the object" (p < 0.005) and "Orientation to each other" (p < 0.001) are statistically significant in subgroups 1, 2 and 3, but differ between subgroups 3 and 1, which suggests that these determinants may characterize subgroup 2, the moderate level of co-dependence.

Conclusions

Based on the obtained data and analysis, the following conclusions were made:
1. With moderate co-dependence, women are characterized by the need to search for a problem or a "trick" in everything and everywhere; there is a belief that nothing is given for nothing, but only through hard work; other people appear hostile; they need to receive constant approval from society; there is an expectation that someday everything will change and improve; the problems of the dependent partner are perceived as the problems of the co-dependent, and his values are perceived as the values of the co-dependent, etc.

2. In people with a high level of co-dependence, these indicators are more pronounced, but the focus on the very cause of the addictive behavior, the psychoactive substance, disappears. This indicates that the co-dependent ceases to direct her own efforts toward effective assistance to the addicted partner and withdraws into herself.

Thus, content analysis allowed us to identify qualitative differences in the degree of manifestation of co-dependent behavior. We can identify these differences during the first session with a co-dependent client, which means we get the opportunity to design a more effective therapeutic program.
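For readers wishing to reproduce the kind of group comparison reported above (a Kruskal-Wallis test followed by Conover post hoc comparisons), a minimal Python sketch follows. The data are synthetic, the scikit-posthocs package is an assumed dependency, and the original analysis was run in R, so this is an illustrative translation rather than the authors' code.

# Hypothetical sketch: Kruskal-Wallis test across three co-dependence
# subgroups, followed by Conover post hoc pairwise comparisons.
import numpy as np
from scipy.stats import kruskal
import scikit_posthocs as sp  # assumed available: pip install scikit-posthocs

rng = np.random.default_rng(2)
# Synthetic frequency scores for one determinant, e.g. "Need for approval".
normative = rng.normal(2.0, 1.0, 42)   # subgroup 1
moderate = rng.normal(3.0, 1.0, 66)    # subgroup 2
high = rng.normal(4.0, 1.0, 44)        # subgroup 3

h, p = kruskal(normative, moderate, high)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# The Conover test returns a matrix of pairwise p-values between subgroups.
pvals = sp.posthoc_conover([normative, moderate, high], p_adjust="holm")
print(pvals)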
2019-11-28T12:36:26.719Z
2019-01-01T00:00:00.000
{ "year": 2019, "sha1": "90fd6f3c638eccf80ee79b9a9b03670b20fef875", "oa_license": "CCBY", "oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2019/11/shsconf_ictdpp2018_09003.pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a8082102d9f12428f2e601a53948597741a6dc4b", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
208598935
pes2o/s2orc
v3-fos-license
DNA Sequencing Resolves Misdiagnosed and Rare Genetic Disorders

This chapter focuses on the mandatory requirement of DNA sequencing approaches for the genetic diagnosis and recurrence prevention of inherited diseases. Sequencing of DNA and coded transcripts has intensely promoted our understanding of functional genomics and of the fundamental importance of non-coding genomic sequences in causing heritable diseases when mutated. Though Sanger sequencing, the first approach employed in identifying genetic mutations, has nowadays been replaced in many laboratories by the highly robust massive parallel sequencing techniques, "Sanger" remains vital in countries with limited resources and is also of essential importance in validating the results of large-scale sequencing technologies. Next generation sequencing (NGS) has enabled the parallel sequencing of the whole exome (WES) and whole genome (WGS) regions of the human genome and has revolutionized the field of genetic and genomic research in humans. WES and WGS have facilitated the identification of the role of previously unrecognized genes in causing neurologic phenotypes and brain structural malformations, and have resolved the causal genes in puzzling and misdiagnosed genetic phenotypes. The role of fusion genes and non-coding RNAs in causing neurogenetic recessive diseases has been uncovered by the application of NGS platforms; published examples are presented in this chapter. Extensive phenotypic variability that left patients misdiagnosed or undiagnosed for years has been resolved into correct diagnoses through NGS research applications.

Introduction

The field of genetics, genomics, and heredity has progressed magnificently since the significant discovery made by Watson and Crick [1] delineating the DNA double helical structure of alternating units (nucleotides), composed of a deoxyribose sugar-phosphate backbone and the nitrogen bases, the pyrimidines (Cytosine, C and Thymine, T) and the purines (Adenine, A and Guanine, G), and since the crucial findings that followed, Chargaff's rules [2], stating that the quantities of the nitrogen bases differ between species and that the number of A equals that of T, and likewise for C and G (implying their pairing status). The "central dogma" of molecular biology describes the flow of heritable genetic information from the nuclear DNA, through the transcription process, into the mRNA that is further translated into proteins or families of proteins. Non-coding regions of DNA also exert their influence on the stability of mRNA, the exon-intron splicing machinery, and translational efficiency [3-6]. The order (sequence) of nucleotides within a known or yet undiscovered set of genes is the first checkpoint that dictates the encoded messenger message and the translated proteins. DNA regulatory sequences, including promoters, untranslated regions, and DNA-methylation-related elements (epigenetic and post-transcriptional splicing modifications), interactively define the transcriptome and proteome expression profiles in different tissues of the body. Newly developed sequencing technologies have enabled the discovery of these regulatory and expression-modifier sequences [7-9]. Changes in the sequence of DNA nucleotides located in the coding, non-coding, or splicing regions of the genome are anticipated to amend, in different ways, the genetic message as well as the properties of the coded proteins and hence their functions in the cell.
These sequence variations are either inherited (passed from one generation to the next through the germline cells, ovum/sperm) or spontaneous (de novo), arising in a subject's germ cells. Spontaneous mutations may in turn be inherited, mostly in a dominant pattern, through the subject's descendants when his/her reproductive ability is not affected by the mutation. Changes (polymorphic variations or disease-underlying mutations) in the DNA sequence may arise through base substitution, small insertion or deletion of bases, structural variations (large deletions or complex rearrangements), or dynamic mutations (expansion of repetitive elements of the genome). DNA, cDNA, or RNA sequencing tools are the evidence-based investigations that help us, as scientists or physicians, to identify, or "see in a Sanger chart," these nucleotide changes and to accurately assign their genomic positions [10,11].

Monogenetic (Mendelian) disorders, caused by single gene defects, are regularly counted among the rare (orphan) diseases. Populations with high rates of consanguineous marriage have been described as having extended multigenerational families that harbor rare monogenetic diseases. The single gene defect can occur on the two copies (alleles) of a gene (homozygous mutant) or on one allele only (heterozygous). Inheritance of Mendelian disorders may be autosomal recessive (the two alleles of an autosomal gene must carry the causative mutation to produce the disease phenotype), dominant (one mutant allele is enough to cause the genetic disease), or X-linked (the mutant gene is located on the X chromosome), with disease transmission occurring mostly through the females, who are obligate carriers of the X-linked mutation [12,13]. Monogenetic diseases may affect various body systems: cardiovascular, central nervous, peripheral nerves, endocrine, renal, pulmonary, etc. The clinical phenotypic spectrum of the distinct categories of these diseases is often heterogeneous or overlapping, which complicates the clinician's decision in making a definitive diagnosis.

Academic studies in the field of human genetic diseases, as well as diagnostics, were complicated for a long period of time by the remarkable clinical and genetic heterogeneity evident within the subgroups of many familial recurrent diseases, including congenital muscular dystrophies, limb-girdle muscular dystrophies, cortical brain malformations, hereditary spastic paraplegias, hereditary sensory neuropathies, neurodevelopmental disorders, and others. With the evolution of NGS technologies, progress and discovery started promptly. High-throughput DNA and cDNA sequencing and validation tools are fundamental approaches that should be implemented in laboratories to reach a correct genetic diagnosis and provide accurate genetic counseling for rare heritable diseases. Genetic diseases may remain undiagnosed or incorrectly managed for decades when sequencing technologies are either not available or not accessible to patients due to their high cost. Identification of genes and mutations in patients serves all family members (siblings, cousins, nephews, or other relatives), allowing carrier detection, premarital planning when first-cousin or other relative marriage is considered, prenatal diagnosis, or preimplantation genetics. These sequencing outcomes make the long-term goal of reducing the occurrence or recurrence of genetic disorders in the community achievable.
The discovery of new genes and novel genetic "mutations/etiologies" for rare diseases has exposed the basis of genetic heterogeneity and increased the depth of genomic investigations, intensely empowered, starting in 2005, by the emerging next generation sequencing (NGS) technologies that enabled the massive parallel sequencing (MPS) of millions of DNA or RNA nucleotides at a time [11,14]. Whole Exome Sequencing (WES), one of the NGS platforms, grew into a widely used genetic diagnostic test in certified diagnostic labs around the world as well as a research tool in academic studies. WES simultaneously targets the variants located in the coding regions and splicing boundaries of genes. The protein-coding genes have been estimated to constitute ~2% of the human genome. Though WES is a powerful tool for identifying the underlying genetic defect in Mendelian disorders, it obviously lacks the capacity to detect non-coding or regulatory disease-causing genetic variations [15][16][17][18]. Whole Genome Sequencing (WGS), the most extensive NGS platform, has the capacity to interrogate the whole genome of a subject: the promoters, the un-translated upstream and downstream genomic ends, and intragenic and intergenic regions, in addition to the coding and splicing parts. Its applications in monogenetic diseases are still mostly at the level of academic research. Its value in discovering new causal roles of previously unrecognized genes in rare inherited diseases comes from its capacity to detect non-coding, regulatory, and large structural variations arising in subjects' genomes [19]. Advances in NGS wet-lab methodologies, improvements in informatics pipelines (read alignment, variant calling), and the huge released data annotation and analysis platforms have led to new gene discoveries, the identification of new etiologies for rare diseases, and new cellular mechanisms contributing to genetic syndromes and disorders. A better understanding of the molecular biology of a gene's mutation constitutes the foundation for new therapeutics [20]. In this chapter we shed light on recent, real examples demonstrating the role of DNA sequencing tools in gene discovery and in resolving the dilemma of certain genetic phenotypes that were undiagnosed for years.

Advances in diagnostics and research of monogenetic diseases

Until the late nineties, Sanger sequencing (the chain-terminator method) was the tool we used, both in service and in research, to identify gene mutations or recognize polymorphic sequence variations in particular gene(s). Sanger sequencing is named after Frederick Sanger and his colleagues, who developed the method in the late seventies [21,22]. This sequencing method enabled the identification of the nucleotide sequence in single amplified DNA or RNA fragments and hence of the changes (variations) from the reference genomes. Sanger sequencing was highly applicable in diagnostics when a particular gene or a few alternative genes were in question.

Demonstrative example from the author's experience: a well-defined genetic phenotype with two alternative claimed causative genes confirmed true by Sanger sequencing

Here, we show the value of Sanger sequencing in resolving, in a fairly good turnaround time, the genetic defect in a group of patients with a phenotype of abnormal cerebral white matter associated with subcortical cysts.
The leukodystrophies are a group of diseases collectively characterized primarily by white matter involvement at variable degrees of severity, ranging from a change in signal intensity on brain images to cystic cavitation or vanishing of the brain white matter contents [23,24]. This group of diseases is genetically heterogeneous; however, with a good clinical history, examination, and high-resolution brain imaging, a differential diagnosis can be set and Sanger sequencing can be applied for the few differential genes. The association of distinctive clinical features of macrocephaly (large head size) detected since birth or shortly thereafter, motor developmental delay, seizures and ataxia precipitated by trauma, and brain images of diffusely swollen white matter with the very characteristic finding of subcortical cysts occurring preferentially in the temporal or frontoparietal lobes (Figure 1) suggested a clinical diagnosis of megalencephalic leukoencephalopathy (MLC), an autosomal recessive disease [OMIM # 604004]. A long list of metabolic disorders can be considered in the differential diagnosis. In 75% of these patients, MLC1 gene mutations cause the disease phenotype, whereas in ~20% of cases another gene, HEPACAM/GlialCAM, contributes to the MLC phenotype. Both MLC1 and GlialCAM have coding regions of reasonable size. The application of direct Sanger sequencing helped several such patients obtain a solid genetic diagnosis of their disease and allowed their families to use the Sanger sequencing results for premarital counseling and preventive measures through carrier detection and prenatal diagnosis. Thus, in cases featuring a rather well-defined phenotype, genes with average-sized coding regions, and few alternative candidate causative genes, the application of Sanger sequencing empowers the genetic diagnosis in a fairly short turnaround time and makes primary prevention of the disease quite possible [25].

Immunohistochemistry-guided Sanger sequencing

In some other diseases, caused by a known contributing family of proteins coded by a subset of genes, the turnaround time to resolve the specific causal gene may be quite long, and Sanger sequencing may not be a suitable diagnostic tool, particularly when there is a large flow of samples. A good example is the limb girdle muscular dystrophies (LGMDs), which constitute a large group of progressive muscle weakness and wasting disorders. Each of the several main groups of LGMD possesses a list of several subtypes caused by genetic mutations in many muscle-protein-related genes. Muscle biopsy (a specimen of muscle fibers) used in immunohistochemical staining is an invasive diagnostic approach applied in patients with LGMDs, aiming to detect the specific missing (deficient) muscle protein, secondary to a gene's alteration, using mono- or polyclonal antibodies. The sarcoglycanopathies are a known genetic group of LGMDs, comprising a family of four proteins forming four subgroups of sarcoglycanopathies (alpha, beta, gamma, and delta), annotated according to the encoded protein and the corresponding gene [26]. The antibodies used in the immunohistochemistry procedure are anticipated to have the capacity to confirm the diagnosis of sarcoglycanopathy-LGMD and the level of the specific protein expression in the muscles, or, in the best-case scenario, may also suggest the specific type of deficient sarcoglycan, whether alpha or beta, etc.
However, in order to confidently determine which of the four sarcoglycan genes (α, β, γ, or δ) harbors a heritable causative pathogenic mutation, gene sequencing should follow the immunohistochemistry. In such cases, Sanger sequencing guided by the immunohistochemistry results can be a valuable diagnostic approach in areas of limited resources, particularly in extended families with multiple affected subjects across successive generations (Figure 2). Often, however, this is not the case, since the antibodies cross-react with the different protein subtypes. In such a situation, though the time required to interrogate multiple related genes, each separately, and release the results may be relatively long, the significance of the sequencing outcomes for preventing the disease and its recurrence is worth the time and effort.

Challenges for the diagnostic application of Sanger sequencing

Genes with extensively large coding regions, like FBN1, Titin, dystrophin, and many others, constitute a challenge for the use of direct Sanger sequencing as a robust tool to characterize the underlying mutations. As a kind of solution, numerous commercial labs limit their molecular diagnostic service to a gene's specific mutation hot spots reported in the relevant populations, when applicable. However, this approach is of limited value when the case harbors a new or rare gene mutation. In rather complex or non-specific clinical genetic presentations, with either undetermined causative genes or negative gene-panel results for a particular group of diseases, Sanger sequencing remains unaccommodating. The evolving roles of non-coding RNA and regulatory sequence alterations in causing heritable genetic diseases further limit the value of Sanger sequencing in diagnostics and in academic human genetics research. For all of these reasons, new approaches were needed to satisfy health care providers' goal of better serving patients with genetic diseases and researchers' need to discover new genes and new etiologies for undiagnosed or misdiagnosed genetic disorders. A targeted gene panel is a designed approach aiming to collectively sequence a group of genes with a known causative relation to a particular inherited genetic disease or a group of closely related diseases. Examples include panels for limb girdle muscular dystrophy, hereditary spastic paraplegias (HSPs), inherited deafness, etc. This approach essentially requires a continuous update of the designed panel to include newly discovered genes, aiming to avoid false-negative results. HSPs are a large group of diseases characterized by progressive lower limb spasticity and a raised-heel (tiptoe) gait, associated in the complex phenotype with brain imaging abnormalities, developmental delay, ataxia, and other features. The list of HSP-associated gene defects is huge, involving around 80 genes, and continues to expand [27]. Commercial HSP gene panels are offered by various diagnostic laboratories; however, pitfalls of negative results that falsely exclude the diagnosis of HSP are not uncommon. Academic studies characterize new HSP-related genes yearly, and these have to be regularly incorporated into the diagnostic market. A proper alternative tool is one of the cutting-edge NGS technologies.

NGS role in mapping genes and mutations to monogenetic diseases' phenotypes

WES and WGS yield a high-throughput set of data.
In the interpretation process, these raw sequencing data/reads are first aligned to the human reference nuclear genome. Differences between the subject's sequencing reads and the reference genome are annotated as "variants", which may be counted either as common "polymorphic" or rare variants. The file containing all annotated variants of a subject's sample is designated the variant call file (VCF). The NGS chemistry and nucleotide capture efficiency, the depth of sequencing coverage, and the bioinformatics pipelines employed in calling the variants of a subject's genome, including the quality of mapping/alignment to the reference genome, govern the potential of the NGS output (VCFs) in gene identification [28][29][30]. The key challenge in NGS data analysis is to identify the disease-causing variants against the tremendous number of variants that are present at a low/rare frequency in the genome or annotated, in silico, as deleterious/pathogenic. Variant prioritization is the protocol employed to select the most likely disease-causing variants. The diagram below (Figure 3) represents the number of variants originally called in the WGS data of a subject and the filters sequentially applied to highlight the most likely candidate disease-related variants.
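To make the filtering strategy concrete, here is a minimal sketch of sequential variant prioritization, written as my own illustration: the field names, thresholds, and the recessive-inheritance assumption are hypothetical and not the authors' actual pipeline.

# Minimal sketch of sequential variant prioritization on parsed VCF records.
# Field names and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    population_af: float           # allele frequency in reference databases
    consequence: str               # e.g. "missense", "nonsense", "synonymous"
    genotype: str                  # "hom" or "het" in the affected subject
    predicted_deleterious: bool    # in-silico pathogenicity annotation

def prioritize(variants, max_af=0.01, recessive=True):
    """Apply the usual filters in sequence: rarity in the population,
    coding impact, in-silico pathogenicity, and the inheritance model
    (homozygous for a recessive disease in a consanguineous family)."""
    damaging = {"missense", "nonsense", "frameshift", "splice"}
    kept = [v for v in variants if v.population_af <= max_af]      # rare
    kept = [v for v in kept if v.consequence in damaging]          # impact
    kept = [v for v in kept if v.predicted_deleterious]            # in-silico
    if recessive:
        kept = [v for v in kept if v.genotype == "hom"]            # model
    return kept

candidates = prioritize([
    Variant("SCHIP1", 0.0000, "nonsense",   "hom", True),
    Variant("TTN",    0.0200, "missense",   "het", True),
    Variant("MLC1",   0.0001, "synonymous", "hom", False),
])
print([v.gene for v in candidates])   # -> ['SCHIP1']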
Gene discovery: identification of genes underlying a worldwide known clinical diagnosis

Kabuki syndrome (KS), OMIM # 147920, is a genetic syndrome of developmental, musculoskeletal, and intellectual disability with distinctive facial features. This syndrome was first described, clinically, in families from Japan in 1981 [31] and then described worldwide in patients from different ethnic groups. Intensive research was conducted using the emerging high-throughput sequencing technology to identify the KS causative gene, but initially without success. The sporadic nature of KS (affected patients had a negative family history and unaffected parents) hampered the path of gene identification. The first Kabuki-associated gene (lysine methyltransferase 2D, KMT2D, originally named MLL2, a gene that regulates the expression of several downstream targets) was discovered only in late 2010 [32], along with the further developments made to WES and to the process of variant identification and interpretation. Spontaneous KMT2D gene mutations were found in over 75% of patients. A second, X-linked, functionally related gene, lysine demethylase 6A (KDM6A), contributes 20% of KS cases [33]. This illustrates how it took about 30 years to identify the underlying gene(s) of a well-defined inherited genetic phenotype. Though the most modern high-throughput technology was available for quite a number of years, the variant-calling pipeline and variant analysis had to be repeatedly refined and optimized to evolve into a successful gene discovery for KS.

NGS approach resolves puzzling clinical phenotypes

With the author's experience and the clinical examples discussed below, we aim to outline the significance of NGS in driving research discovery into clinical implementation and patient care. Hereditary sensory and autonomic neuropathies (HSANs) are a genetically heterogeneous group of diseases whose phenotypic characteristics involve pain insensitivity (sensory loss) with its sequelae, decreased sweating (hypohidrosis/autonomic dysfunction), plus mild motor weakness in a subset of patients [34]. Though the mechanism by which the disease pathology develops is not well understood, a known, short list of underlying genes has been characterized; these are sequenced when the unique HSAN phenotype is suspected. A consanguineous pedigree had two children, a boy and a girl aged 14 and 10 years respectively, who displayed a phenotype resembling that of the hereditary sensory and autonomic neuropathies (HSANs). The clinical presentation was characterized by two distinct features: severe pain insensitivity associated with hypohidrosis since birth, along with the sequelae of impaired pain sensation and severe aseptic destruction of large and small joints as well as the vertebrae (Figure 4). The two affected siblings had been examined by multiple local and international experts; the clinical diagnosis given was a general one describing an immune inflammatory disease (due to the joint destruction), but the associated, severely remarkable pain insensitivity remained unexplained in the context of immune inflammation. WGS revealed, unexpectedly, a homozygous mutation in LIFR. LIFR mutations have been associated with Stüve-Wiedemann syndrome (SWS), a lethal autosomal recessive skeletal dysplasia that may be associated with mildly reduced pain sensation in atypical long survivors. The complexity (overlapping phenotypes) as well as the striking severity of the pain insensitivity phenotype, which phenocopies HSANs and was atypically associated with extensive bone destruction, challenged the diagnosis. WGS resolved this case's dilemma and provided the family opportunities for preimplantation genetics as well as premarital counseling for other family members. Not only that, but it also revealed a new mechanism of LIFR functional alteration (defective glycosylation of the mutant protein) [35]. The WGS finding in these cases warrants attention to consider LIFR testing in genetically unresolved phenotypes mimicking HSAN.

NGS maps a neurodevelopmental axonal guidance phenotype to a previously unrecognized gene

Neurodevelopmental disorders associated with brain malformation are the most extensively large group of neurological disorders. This group incorporates a broad spectrum of manifestations primarily involving the central nervous system and variably associated with motor and/or psychomotor delay, microcephaly, epilepsy, specific behaviors, abnormal movements, eye symptoms, dysmorphic features, or hypotonia. Brain imaging is very helpful for the clinical diagnosis; however, it remains challenging to reach a firm genetic diagnosis without NGS approaches. Each individual disease of this group is a rare disease. Some underlying genes have been identified and characterized; many others remain unknown or uncharacterized for their role in causing such diseases, awaiting further research and discoveries. We present here such an example of a family with three affected siblings, a boy and twin sisters born to consanguineous parents. The clinical phenotype of global developmental delay and learning difficulties associated with mild dysmorphism and hearing impairment presented at variable severity between the older boy and the two affected female siblings. This clinical phenotype, though it can be categorized as a neurodevelopmental disorder, is very nonspecific. The older boy was given a provisional diagnosis of autistic spectrum hyperactivity due to some related features. Brain imaging findings of cortical malformation (polymicrogyria-cobblestone complex), central atrophy, and axonal guidance defects were variably present in the three siblings.
WGS applied to 8 members of this family (6 siblings: 3 affected and 3 unaffected, plus the parents), followed by bioinformatic variant analysis and gene functional reviews, successfully filtered the SNV yield and identified a novel nonsense mutation in a previously unrecognized gene, Schwannomin-Interacting Protein 1 (SCHIP1) (Figure 5) [36]. SCHIP1 had not previously been associated with human neurodevelopmental disorders or brain malformation. However, mouse studies in which schip1 isoforms were knocked out produced a phenotype of brain axonal guidance defects, similar to that detected in these patients. This gene has multiple isoforms, including a fused gene (IQCJ-SCHIP1) isoform with a variable tissue expression pattern and a reported role in axonogenesis during brain development. This example demonstrates the significant role of the massive parallel sequencing approach, together with reviews of studies in mice with a rather similar brain imaging phenotype, in characterizing a new gene contributing to a neurodevelopmental brain malformation phenotype.

Figure 4 caption: Phenotypic sequelae of severe pain insensitivity, aseptic painless fractures, and inflammation of large and small joints in patients with LIFR mutation [35]. Permission obtained from the copyright owner. Multiple images display prominent spinal kyphoscoliosis and swollen knee joints; the scar on the left knee is due to a surgical procedure treating the joint's inflammation.

WGS reveals a new non-coding RNA minor-splicing machinery component that maps to a pure congenital cerebellar ataxia phenotype

Hereditary cerebellar ataxias (HCAs), with uncoordinated gait and body movements, can be inherited as autosomal dominant or recessive traits or in association with other neurological diseases. Hereditary ataxias are due to degeneration of cerebellar neurons or dysfunction of the spinocerebellar tracts [37]. Several genes, through their coding regions, have been identified as causative for the HCAs. The emerging regulatory role of small non-coding RNA is evolving as a new mechanism underlying human genetic diseases. WGS is particularly relevant to the identification of mutations in non-coding regions of the genome. An example, the second worldwide of such a condition, was recently published [38]. In the referenced article, a large interrelated kindred had 6 patients with hereditary ataxia of unknown genetic etiology. Delayed speech and developmental milestones, congenital hypotonia, dysarthric speech, intention tremor, head nodding, and an ataxic gait with a falling tendency were the main complaints, at variable severity among the affected patients. Brain images supported the cerebellar involvement (Figure 6). A clinical diagnosis of an autosomal recessive cerebellar ataxia was suggested. Genetic investigations involving a gene panel test and WES were performed; however, the results came back negative. WGS performed, on a research basis, for 11 members of two branches of the extended family revealed an interesting, nevertheless complex, result that required functional testing to verify the causative gene and the biological impact of the genomic mutation. WGS data analysis identified a variant (SNV) that was located in the promoter region of a protein-coding gene, POLDIP3, and fell as well within a small nuclear non-coding RNA gene (RNU12) transcribed from the opposite strand (Figure 7). Interestingly, RNU12 has been reported as a component of the U12 minor splicing machinery that functions in the splicing of genes containing minor introns.
Experimental investigations, involving quantitative expression analysis of the genes, RNA-seq, and semi-quantitative analysis of the retention of minor-intron-containing genes (due to the defective splicing machinery), established the causal relation of RNU12 to the disease phenotype in this large family. This story underscores the value of WGS in uncovering the unrecognized regulatory role of the snRNA gene RNU12 in human brain development and function, and its value in identifying the molecular gene defect in an example of monogenetic disease that would have remained uncovered had only WES been undertaken. This gene result has been used by healthy family members for carrier detection, premarital counseling, and prenatal diagnosis. The ages at which patients of this kindred received the genetic diagnosis of their disease were 25 years (the female proband), 22 years for her brother, 15 and 10 years for her sisters (first branch), and 19 and 13 years for the female siblings of the second branch. This highlights how NGS has empowered the diagnostic odyssey of monogenetic diseases, translating research into the clinic, improving targeted patient care, and preventing disease recurrence in the family and community.

Conclusion

The advancement of new therapeutics for genetic diseases is definitely influenced by research and technologies that support swift, reliable, and interpretable OMICs
The effect of nephrectomy on Klotho, FGF-23 and bone metabolism

Background: Increased concentration of fibroblast growth factor 23 (FGF-23) and decreased levels of soluble Klotho (sKL) are linked to negative clinical outcomes among patients with chronic kidney disease and acute kidney injury. Therefore, it is reasonable to hypothesize that the GFR reduction caused by nephrectomy might alter mineral metabolism and induce adverse consequences. Whether nephrectomy due to urological indications causes derangements in FGF-23 and sKL has not been studied. The aim of the study was to evaluate the effect of acute GFR decline due to unilateral nephrectomy on bone metabolism, FGF-23 and sKL levels. Methods: This is a prospective, single-centre observational study of patients undergoing nephrectomy due to urological indications. Levels of C-terminal FGF-23 (c-FGF-23), sKL and bone turnover markers [β-crosslaps (CTX), bone-specific alkaline phosphatase (bALP) and tartrate-resistant acid phosphatase 5b (TRAP 5b)] were measured before and after surgery (5 ± 2 days). Results: Twenty-nine patients were studied (14 females, age 63.0 ± 11.6, eGFR 87.3 ± 19.2 ml/min/1.73 m2). After surgery, eGFR significantly declined (p < 0.0001). Nephrectomy significantly decreased the sKL level [709.8 (599.9–831.2) vs. 583.0 (411.7–752.6) pg/ml, p < 0.001] and did not change the c-FGF-23 concentration [70.5 (49.8–103.3) vs. 77.1 (60.5–109.1) RU/ml, p = 0.9]. Simultaneously, alterations in bone turnover markers were observed. The serum concentration of CTX increased [0.49 (0.4–0.64) vs. 0.59 (0.46–0.85) ng/ml, p = 0.001], while bALP and TRAP 5b decreased [23.6 (18.8–31.4) vs. 17.9 (15.0–22.0) U/l, p < 0.0001 and 3.3 (3.0–3.7) vs. 2.8 (2.3–3.2) U/l, p < 0.001, respectively]. Conclusions: Nephrectomy in patients with preserved renal function before surgery does not increase c-FGF-23 but reduces sKL. Moreover, nephrectomy results in derangements in bone turnover markers in short-term follow-up. These changes may participate in the pathogenesis of bone disease after nephrectomy.

Introduction

Fibroblast growth factor 23 (FGF-23) and Klotho are key players in maintaining mineral homoeostasis. An increase in FGF-23 concentration is accompanied by Klotho reduction as chronic kidney disease (CKD) worsens [1,2]. Both molecules have gained attention for their association with important clinical outcomes such as cardiovascular incidents [3] and mortality [4]. It has been thought that FGF-23 and Klotho may link bone mineral disturbances with cardiovascular mortality. However, little is known about the behaviour of FGF-23 and soluble Klotho (sKL) after a mild-to-moderate reduction in glomerular filtration rate (GFR), such as that evoked by nephrectomy. This might be especially important with regard to living kidney donation and patients undergoing nephrectomy due to urological indications, in light of the negative associations between FGF-23 and clinical consequences. It is a matter of debate whether and how acute GFR decline predisposes to bone metabolism disturbances. The impact of those changes on bone metabolism has been studied in two groups of patients: living kidney donors and patients undergoing nephrectomy. On the one hand, nephrectomy due to renal cell carcinoma is a significant risk factor for osteoporosis and increased fracture risk [7]; on the other hand, the data concerning living kidney donors are conflicting. It is reported that even though living kidney donors experience alterations of hormones involved in bone metabolism [8][9][10], there is no increased fracture risk [11].
The precise mechanism by which acute GFR decline predisposes to bone metabolism disturbances is largely unknown. Nephrectomy due to urological indications in patients with preserved renal function offers a unique clinical model to study the effect of an acute mild-to-moderate GFR decline, as the patients serve as their own controls, without the confounding factors that are always present among patients with CKD and even more pronounced in the AKI setting. Given that alterations in FGF-23 and sKL disturb bone health early in the course of CKD, the study was undertaken to evaluate the effect of acute GFR decline due to unilateral nephrectomy on bone metabolism, FGF-23 and sKL levels. The secondary aim was to assess whether changes in the concentration of the above molecules evoked by nephrectomy are associated with alterations in the markers of bone formation and/or resorption.

Study design

This study is a post hoc analysis of frozen blood and urine samples from previously reported data [12]. The former study was designed as a prospective, single-centre trial in which each patient served as their own control. All patients undergoing nephrectomy due to urological indications in the Department of Oncological and General Urology, Sniadecki Provincial Hospital in Bialystok (Poland), were enrolled unless end-stage renal disease requiring dialysis (four patients) or lack of informed consent (five patients) was present. After the surgery, all the patients underwent a standard hydration protocol with isotonic saline. During the first 24 h, 2500 ml of intravenous fluid was administered and nil per os was prescribed; on the second day, 1000 ml of intravenous fluid was given and patients were allowed to start progressive oral fluid intake. From the third day, no intravenous fluids were given. The study was approved by the local ethics committee. The protocol adhered to the principles of the Declaration of Helsinki and written informed consent was obtained from each participant.

Laboratory measurements

Urine and venous fasting blood samples were collected on the morning prior to surgery and after nephrectomy (5 ± 2 days after nephrectomy, depending on the duration of hospitalization). To prevent possible changes due to intra-day variability, samples were always collected in the early morning, after overnight fasting. Plasma, serum and urine were centrifuged, aliquoted and frozen at −70 °C until assayed.

Calculations and statistics

To estimate renal phosphate and calcium handling, we used the ratio of tubular maximum phosphate reabsorption to the glomerular filtration rate (TmPO4/eGFR, an index of the renal phosphate threshold) and the urinary fractional excretion of calcium (FECa). The calculations were made as described previously [13,14]; FECa = [(urinary calcium × serum creatinine)/(serum calcium × urinary creatinine)] × 100%.
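The fractional-excretion formula above translates directly into code. The sketch below is my own illustration: the FECa function follows the formula exactly as stated, while the TmPO4/eGFR step uses the commonly applied tubular-reabsorption relation (TmPO4/GFR = TRP × serum phosphate, for TRP ≤ 0.86), which is an assumption here, since the paper cites [13,14] for its exact formula rather than reproducing it.

def fe_ca(u_ca, s_ca, u_cr, s_cr):
    """Fractional excretion of calcium (%), exactly as defined in the text."""
    return (u_ca * s_cr) / (s_ca * u_cr) * 100.0

def tmp_per_gfr(u_p, s_p, u_cr, s_cr):
    """Tubular maximum phosphate reabsorption per GFR (units of s_p).
    Assumption: simple TRP-based relation, valid for TRP <= 0.86;
    the paper's own formula is only cited [13,14], not reproduced."""
    trp = 1.0 - (u_p * s_cr) / (s_p * u_cr)   # tubular reabsorption of phosphate
    return trp * s_p

# Illustrative spot-sample values (mg/dl for all analytes):
print(round(fe_ca(u_ca=12.0, s_ca=9.5, u_cr=110.0, s_cr=0.9), 2))      # ~1.03 %
print(round(tmp_per_gfr(u_p=70.0, s_p=3.5, u_cr=110.0, s_cr=0.9), 2))  # ~2.93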
The distribution of variables was tested with the Shapiro-Wilk W test of normality. Normally distributed data are presented as mean ± 1 SD, skewed data as median (interquartile range; IQR). Before statistical computations, logarithmic transformations were performed on skewed variables to obtain a normal distribution, where possible. Student's t test for paired samples or the Wilcoxon signed-rank test was used to compare continuous variables at selected time points. Changes in measured parameters were expressed as delta (∆) and calculated as the postoperative minus the preoperative value. Associations between deltas were assessed using bivariate correlations with Pearson's or Spearman's test, depending on whether the assumptions were met. Mixed regression analyses accounting for the time effect were performed in search of longitudinal associations between the variables of interest. Results are reported as the beta coefficient β with 95% confidence intervals (95% CI). A two-tailed p value of <0.05 was considered statistically significant. All computations were performed with Statistica 12 (StatSoft, Tulsa, OK, USA).

Characteristics of the study population

We enrolled twenty-nine patients (14 females) who underwent nephrectomy. The mean baseline eGFR was 87.3 ± 19.2 ml/min/1.73 m2. Additional baseline characteristics of the studied population are detailed in Table 1. After nephrectomy, anuria occurred in four subjects; thus, parameters evaluated in urine were measured in 25 subjects. In these patients, the hydration protocol was violated, as they received fluids according to their condition. Statistical computations were repeated after exclusion of these cases and yielded similar results. None of the patients required dialysis during the study. The groups with partial and radical nephrectomy, and with tumour and non-tumour indications, did not differ significantly in the evaluated parameters, apart from a borderline difference in the baseline value of sKL between the tumour and non-tumour groups (tumour group 680.2 ± 131.2 vs. non-tumour group 805.3 ± 185.9, p = 0.05).

Effect of nephrectomy on calcium, phosphate and iPTH

Biochemical parameters before and after nephrectomy are presented in Table 2. As expected, eGFR declined significantly after surgery compared with the baseline values (87.3 ± 19.2 vs. 69.8 ± 24.7 ml/min/1.73 m2, p < 0.0001). We observed a significant decline in serum calcium and phosphate concentrations (p < 0.0001 and p = 0.002, respectively), while urinary calcium (p = 0.02) and phosphate excretion (reduction in TmPO4/eGFR, p = 0.001) increased. There was no significant change in intact PTH concentration during the study period.

Effect of nephrectomy on c-FGF-23 and sKL

There was a significant decrease in the sKL level (p < 0.0001) after nephrectomy (Fig. 1a). The serum concentration of c-FGF-23 was not changed by the procedure (Fig. 1b).

Effect of nephrectomy on markers of bone resorption and formation

Nephrectomy resulted in alterations of bone turnover markers. Serum CTX increased after surgery compared to baseline (p < 0.01; Fig. 2a), while serum TRAP 5b and bALP concentrations decreased (p < 0.001; Fig. 2b and p < 0.0001; Fig. 2c). Neither ∆CTX, ∆TRAP 5b nor ∆bALP correlated with ∆eGFR. The ∆CTX evoked by nephrectomy was associated only with changes in sKL concentration (negatively; rho = −0.4, p = 0.03) and with alterations of serum phosphate (positively; rho = 0.4, p = 0.04). In a mixed regression model accounting for the time effect, none of the aforementioned parameters significantly predicted CTX concentration.

Discussion

The aim of this study was to test the hypothesis of whether nephrectomy changes the concentration of c-FGF-23 and sKL and affects bone metabolism. Our data show that the acute reduction in GFR after nephrectomy does not alter c-FGF-23 concentration in short-term follow-up. This is in line with a recent study performed in living kidney donors, which showed an FGF-23 reduction, rather than an increase, in the short term [8]. On the other hand, we are aware that the decrease in calcium concentration observed in our study might prevent an FGF-23 increase, as was shown previously [16].
However, we did not observe any association between calcium and FGF-23 changes. These results contrast with data obtained from patients with AKI, in whom elevated FGF-23 levels are observed as early as 1 h after AKI onset, with more than a tenfold rise at 24 h [5]. Moreover, a human study showed that an FGF-23 increase at the end of cardiac surgery predicted AKI development [6]. The mechanisms underlying augmented FGF-23 levels in patients with AKI are unknown. Our data shed some light on the potential mechanism, suggesting two possible explanations: either there is a threshold of acute GFR decline beyond which FGF-23 production escalates, or a different factor, associated with the acute state in the setting of AKI, activates the FGF-23 increase. Currently, two assays measuring FGF-23 levels are available: an intact FGF-23 (iFGF-23) assay that detects only the biologically active form, and a second one, used in our study, that measures both C-terminal fragments and the intact form of FGF-23. Thus, if the intact form of FGF-23 is elevated, the C-terminal assay would also detect the rise. Of course, when the C-terminal assay detects an elevation in FGF-23, it is difficult to say whether there is a rise in iFGF-23 or an increase in FGF-23 cleavage. Since we did not observe any changes in the C-terminal FGF-23 assay, we can hypothesize that in our study neither increased synthesis nor increased degradation occurred. However, without a c-FGF-23/iFGF-23 ratio to support our opinion, an alternative explanation is possible. Theoretically, a change in C-terminal FGF-23 smaller than the detectable effect level could have been missed; that level was based on power calculations using two previously published datasets (for iFGF-23 after nephrectomy [18], and c-FGF-23 after a hip replacement procedure [19]).

Fig. 2 caption: Impact of nephrectomy on serum levels of (a) cross-linked C-telopeptide of type 1 collagen (CTX), (b) tartrate-resistant acid phosphatase 5b (TRAP 5b) and (c) bone-specific alkaline phosphatase (bALP). Data are presented as median and interquartile range.

Whether nephrectomy induces an FGF-23 increase, with all its negative consequences, is especially important for living kidney donors. Data from kidney donors showed that in long-term follow-up after nephrectomy, FGF-23 increases [9,10,18]. These results differ from those of our study, although we investigated short-term changes and our study population consisted of patients with urological indications for nephrectomy. We may hypothesize that the increase in FGF-23 occurs at a greater GFR decrease than observed in our study, or that our follow-up was too short to detect changes. A rise in FGF-23 is an effect of GFR decline, and time is essential for those changes to become evident. Our finding that FGF-23 levels remain constant despite nephrectomy is especially interesting in the context of the results obtained by Goebel et al., who reported a significant increase in FGF-23 after orthopaedic procedures [19]. This suggests that stimuli other than surgery itself induce this molecule's production. This hypothesis is in line with data obtained in an animal experiment in which FGF-23 levels in sham-operated rats were stable [20]. Results from ICU patients, in whom an FGF-23 increase was observed despite normal renal function, make the question about the factors determining FGF-23 levels even more intriguing [21]. In agreement with previous reports [22], we observed a significant decrease in serum sKL after renal mass reduction, reaffirming the statement that the kidneys are the major source of sKL.
A study in living kidney donors with longer follow-up demonstrated that sKL increased after the initial postoperative reduction but was still lower than before donation [8]. Apart from Klotho's role in ageing and phosphate homoeostasis [23], its soluble form acts as an endocrine protein that exerts pleiotropic actions, including protection of endothelial function through its antioxidant properties, inhibition of vascular calcification, and suppression of fibrosis and inflammation [24,25]. Reduced sKL may therefore contribute to many complications, but how this translates into any patient-oriented measures is still a matter of debate. Even though sKL is discussed as a novel biomarker of progression in CKD [26], there are conflicting data from human studies regarding the influence of sKL levels on clinical outcomes, reporting both positive [27] and neutral effects [28]. The acute reduction in GFR resulted in derangements of minerals handled by the kidneys, e.g., decreased serum calcium and phosphate levels. The most probable cause of the reduction in phosphate concentration was an increase in renal losses. Our data also showed increased fractional calcium excretion after nephrectomy. Even though decreased serum calcium is connected to disturbed vitamin D metabolism [29], hypocalcemia may not be specific to nephrectomy, since hypocalcemia and a secondary increase in PTH have been observed after various abdominal surgeries [30]. In our opinion, the hypercalciuria may be partly caused by the sKL reduction, as a lack of this molecule diminishes the activity of TRPV5 channels [31] and leads to increased renal calcium excretion, although we did not observe a significant association between the degree of calciuria and the magnitude of sKL reduction. Another compelling finding of the present study is that an acute decline in GFR caused by nephrectomy is associated with alterations in both markers of bone metabolism: resorption and formation. We have shown a rise in CTX and a decrease in bALP and TRAP 5b after nephrectomy. This finding seems contradictory, as CTX and TRAP 5b are both markers of resorption, yet we found opposite behaviour after surgery. Since CTX tends to accumulate as renal function declines [32,33], TRAP 5b, which is degraded in the liver, is more accurate in our study population, because kidney function has no effect on circulating TRAP 5b activity [34,35]. Our study suggests that surgically induced nephron loss leads to a decrease in bone turnover, which could disrupt bone homoeostasis. We are aware that, due to the short-term evaluation in our study, one may speculate that the reported changes are the effects of the surgery rather than of the renal mass reduction. However, after hip replacement surgery, no changes in serum calcium, phosphorus, total alkaline phosphatase or urinary phosphorus were reported one week after the procedure [36]. Based on the current sparse data from human [36] and animal studies [20], it seems that nephrectomy, rather than surgery itself, induces the changes in bone metabolism. The broader impact of the changes initiated by nephrectomy is unclear. Animal work shows that subtotally nephrectomized rats had alterations of the structural and mechanical properties of cortical bone material [37]. Human data reported by Bagrodia et al. [7] revealed an association of radical nephrectomy with a higher risk of postoperative osteoporosis and fractures, showing the superiority of partial over radical nephrectomy.
On the other hand, in a study of kidney donors, the fracture rate was not significantly higher compared to controls [11], although others have reported disturbances in bone metabolism markers [10]. Further studies on this matter are needed. If the association between nephrectomy and disturbed bone metabolism is confirmed in a larger study, prevention of changes in bone health might become an important concern in the management of kidney donors and urological patients. There are some limitations to this study. We are aware that there is difficulty in assessing parameter changes after nephrectomy, given the possible influence of fasting before the operation, undergoing anaesthesia, and intravenous hydration. However, at least with regard to c-FGF-23, a recently published study showed that acute volume changes do not impact its measurement [38]. The study population consisted of patients with various indications for nephrectomy; therefore, we cannot exclude that the presence of different diseases had an impact on our conclusions. Since our results were uniform and each patient served as their own control, this seems very unlikely, although it cannot be ruled out. Moreover, our aim was to evaluate the impact of nephrectomy on FGF-23 and sKL concentrations in patients with preserved renal function; nevertheless, we did not study living kidney donors. Thus, our findings might not be relevant to this specific group. An additional study with longer follow-up is needed to confirm that our results are consistent over time. However, we think that this study makes an interesting point in the discussion about the short-term behaviour of FGF-23 after kidney injury, showing that nephrectomy may differ from other types of kidney damage. The study was not designed to examine patients' outcomes, which does not allow conclusions regarding causality. Finally, a comparator group and assessment of the degree of bone loss by a bone mineral density technique would undoubtedly have strengthened our results. In summary, we present data showing a neutral effect of GFR reduction on the FGF-23 concentration of patients undergoing nephrectomy due to urological indications. The evoked reduction in renal mass causes a decrease in the sKL level. Whether this translates into patient-oriented clinical outcomes requires further investigation. Moreover, nephrectomy resulted in derangements in bone turnover markers. These changes may participate in the pathogenesis of bone disease after nephrectomy.
On The Generalized Natural Transform

The integral transform method has a wide range of applications in various fields of science and engineering. In most cases the physical phenomenon is converted into ordinary differential equations and partial differential equations, which can be solved by the integral transform method. This is the basic motivation for researchers to define new integral transforms and use them to solve many problems in the field of applied mathematics. Recently, a new integral transform, the Natural transform (N-transform), was introduced by (Khan and Khan, 2008), who studied its properties and some applications. Later, (Silambarasan et al., 2011 and Belgacem et al., 2012) defined the inverse Natural transform and studied some properties and applications of Natural transforms. Distribution theory provides a powerful analytical technique to solve many problems that arise in applied fields. This has given rise to definitions of various integral transforms on distribution spaces (Loonker, 2010, 2012 & 2013; Omari, 2014; Shah, 2015; Pathak, 1997; Schwartz, 1950, 51; Zemanian, 1987). The aim of this paper is to extend the Natural transform to the distributional space of compact support and to investigate some properties and theorems of the generalized integral transform.

1.1 The Natural Transformation

The Natural transform is defined for functions f(t) that are sectionwise continuous and of exponential order, belonging to the set

A = { f(t) : there exist M, τ₁, τ₂ > 0 such that |f(t)| < M e^(|t|/τ_j), if t ∈ (−1)^j × [0, ∞), j = 1, 2 },

where s and u are the transform variables, and is given by the integral equation

N[f(t)] = R(s, u) = ∫₀^∞ e^(−st) f(ut) dt,  s, u > 0.

If R(s,u) is the Natural transform, F(s) the Laplace transform, and G(u) the Sumudu transform of a function f(t) ∈ A, then we have the Natural-Laplace and Natural-Sumudu dualities

R(s, 1) = F(s)  and  R(1, u) = G(u).

We can thus extract the Laplace, Sumudu, Fourier and Mellin transforms from the Natural transform, which shows that the Natural transform converges to the Laplace and Sumudu transforms (Shah et al., 2015). Moreover, the Natural transform serves as a source for other transforms and is the theoretical dual of the Laplace transform. Further study and applications of the Natural transform can be found in (Silambarasan et al.).
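As a quick sanity check of the definition and the dualities above, here are two transforms worked directly from the integral (my own illustration, not from the paper):

\mathbb{N}[1](s,u) = \int_0^\infty e^{-st}\,dt = \frac{1}{s},
\qquad
\mathbb{N}[t](s,u) = \int_0^\infty e^{-st}\,(ut)\,dt = \frac{u}{s^{2}}.

Setting u = 1 recovers the Laplace transforms 1/s and 1/s², while setting s = 1 recovers the Sumudu transforms 1 and u, consistent with R(s,1) = F(s) and R(1,u) = G(u).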
Basic Properties of the Natural Transform

6. If F(s,u) and G(s,u) are the Natural transforms of the respective functions f(t) and g(t), both defined in the set A, then the transform of their convolution satisfies

N[(f ∗ g)(t)] = u F(s,u) G(s,u).

Generalized Natural Transform

The author Deshna Loonker (Loonker et al., 2013) has studied the distributional Natural transform; motivated by that study, we here construct the testing function space on which the generalized integral transform is defined and prove some theorems for the generalized Natural transform.

Testing function space

Let 𝒩_{a,b} denote the testing function space. 𝒩_{a,b} is a linear space under the pointwise addition of functions and their multiplication by complex numbers. Each γ_k is clearly a seminorm on 𝒩_{a,b}, and γ_0 is a norm. We assign the topology generated by the sequence of seminorms (γ_k), k = 0, 1, 2, …, thereby making 𝒩_{a,b} a countably multinormed space. Note that, for each fixed s and u, the kernel of the Natural transform, as a function of t, is a member of 𝒩_{a,b}, so that for f in the dual space 𝒩′_{a,b} the generalized Natural transform

R_f(s, u) = ⟨ f(t), e^(−st/u)/u ⟩,  (s, u) ∈ Ω_f,   (8)

is well defined (this kernel form follows from the classical definition by the substitution x = ut). We call Ω_f the region (or strip) of definition for N[f(t)] and w₁ and w₂ the abscissas of definition. Note that properties such as linearity and continuity of the generalized Natural transform follow from this construction.

Theorem 2.1 (Analyticity). R_f(s,u) is analytic on Ω_f.

Proof: Let (s,u) be an arbitrary but fixed point in Ω_f. Choose real positive numbers a, b and r such that w₁ < a < Re s − r and Re s + r < b < w₂. Let ∆S be a complex increment such that |∆S| < r and s + ∆S ∈ ℂ, so that equation (8) is meaningful. We shall show that, as ∆S → 0, the difference quotient [R_f(s + ∆S, u) − R_f(s, u)]/∆S converges to ∂R_f(s,u)/∂s. Since f ∈ 𝒩′_{a,b}, it suffices to show that ψ_{∆S}, the corresponding difference quotient of the kernel minus its s-derivative, converges to zero in 𝒩_{a,b}. We may interchange differentiation with respect to s and differentiation with respect to t, and, using the Cauchy integral formula in equation (8), bound each seminorm of ψ_{∆S}: for all ξ ∈ ℂ with |ξ| ≤ r and −∞ < t < ∞, the relevant integrand is bounded by a constant M independent of ξ and t. The resulting right-hand side is independent of t and converges to zero as ∆S → 0. This shows that ψ_{∆S} converges to zero in 𝒩_{a,b} as ∆S → 0, which completes the proof of the theorem. A similar proof can be given for the other variable, u.

Theorem 2.2 (Characterization Theorem). Necessary conditions for the function R_f(s,u) to be the Natural transform of a generalized function f are that R_f(s,u) is analytic on Ω_f and that, for each closed strip {s : a ≤ Re s ≤ b} with w₁ < a < b < w₂, there is a polynomial P such that |R_f(s,u)| ≤ P(|s|) on the strip. The polynomial P will depend, in general, on a and b.

Proof: The analyticity of R_f(s,u) was already proved in the previous theorem. By the definition of the Natural transform, f is a member of 𝒩′_{a,b}, where w₁ < a < b < w₂, so there exist a constant M and a non-negative integer r bounding the action of f on the kernel, from which the polynomial growth estimate follows.

Conclusion

In this paper we extended the Natural transform to the distributional space of compact support and thus defined the generalized Natural transform. The analyticity and characterization theorems are proved. This paper might open a new window for researchers in the study of generalized integral transforms.
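As a closing illustration, here is a small symbolic check of the convolution property stated in the properties list above, using the classical N-transform definition (my own sketch, not part of the paper):

# Symbolic check of N[(f*g)(t)] = u * F(s,u) * G(s,u) for f = g = 1,
# where (f*g)(t) = integral_0^t f(x) g(t - x) dx = t.
import sympy as sp

t, s, u = sp.symbols("t s u", positive=True)

def natural_transform(expr):
    """N[f](s, u) = integral_0^oo exp(-s*t) * f(u*t) dt (classical definition)."""
    return sp.simplify(sp.integrate(sp.exp(-s * t) * expr.subs(t, u * t), (t, 0, sp.oo)))

F = natural_transform(sp.Integer(1))   # N[1] = 1/s
G = natural_transform(sp.Integer(1))   # N[1] = 1/s
lhs = natural_transform(t)             # N[f*g] = N[t] = u/s**2
rhs = u * F * G                        # u * (1/s) * (1/s) = u/s**2
print(lhs, rhs, sp.simplify(lhs - rhs) == 0)   # u/s**2  u/s**2  True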
HIV/AIDS risk-reduction options as predictor of female sex workers' sexual behaviour

Background: Sex workers are highly vulnerable to HIV infection and are an important transmission group in the scope of the HIV epidemic. The study investigated the association among HIV/AIDS risk-reduction options as predictors of female sex workers' behaviour. Methods: The study used a cross-sectional research design. The quantitative survey involved 140 women, who were selected using simple random sampling techniques. Results: The findings of the study show that looking for a job, disagreement in the family, death of parents, peer pressure, and the attraction of town life were indicated as major pulling and pushing factors for engagement in sex work. The majority of respondents are aware of HIV/AIDS and practice HIV/AIDS risk-reduction options. Conclusion: Age, alcohol use, difficulty negotiating condom use with clients, and clients' disapproval of condom use were significant predictors of female sex workers' consistent condom-using behaviours. This study recommends that life skills training and existing strategies enable sex workers to develop skills that help them resist the pressures that come from their clients.

Introduction

Sex work has been defined as the provision of sexual services in exchange for money, goods, or other benefits. Most sex work has a strong economic basis, with motivations ranging from survival, debt alleviation, drug dependency, and coercion to a desire for wealth. Female commercial sex workers (FCSWs) generally are females who sell sex for money or in exchange for other transactions. 1 Sub-Saharan Africa comprises a relatively high proportion of FCSWs, with prevalence ranging between 0.7% and 4.3% in the capitals. 2 Studies have revealed that sex work initiation among adolescent sex workers is determined more by peer group pressure and sexual experimenting than by economic needs. 3,4 In Addis Ababa, Ethiopia, girls began commercial sex work because of peer pressure, lack of employment opportunity, the influence of traffickers, poverty, frequent disagreement in the family, death of parent(s), early marriage, and exploitation in previous jobs. 4 A study in the Amhara region by the Ethiopian Public Health Agency (EPHA) also indicated financial problems, conflict with the people with whom they live, death of parents, and personal interest as reasons for engagement in sex work. 5 It is well known and accepted that commercial sex workers (CSWs) are highly vulnerable to HIV infection and are an important transmission group in the scope of the HIV epidemic. HIV/AIDS is commonly transmitted from an infected individual through semen and vaginal fluids during unprotected sex without the use of condoms. 6 In relation to this, sex workers are among the groups most vulnerable to the contraction and transmission of HIV/AIDS, not only due to the nature of their work but also because of sexual relationships with rapidly changing multiple partners. 6 The HIV/AIDS knowledge of FSWs and their clients is associated with the practice of protected sex. Lack of comprehensive HIV/AIDS knowledge and self-protection skills is the number one determinant of the epidemic. In Ethiopia, Central Statistics Agency (CSA) studies concluded that only around 40% of the most-at-risk populations both correctly identify ways of preventing the sexual transmission of HIV and reject major misconceptions about HIV transmission. 7,8 Sex work is an extremely dangerous profession.
The use of risk-reduction options can help to safeguard sex workers' lives in the same way that drug users have benefitted from drug-use harm reduction. Sex workers are exposed to serious harms: drug use, disease, violence, discrimination, debt, criminalization, and exploitation (forced sex without a condom, child prostitution, trafficking for sex work, and exploitation of migrants), and so on. 9 Reducing the risk to sex workers depends on a range of strategies that includes safety advice, awareness of potentially dangerous clients, assertiveness, and negotiating skills. Successful risk-reduction strategies include peer education, training in condom-negotiating skills, safety tips for street-based sex workers, male and female condoms, empowerment, prevention, care, occupational health and safety, decriminalization of sex workers, and human rights-based approaches. 10 Male condoms are one way of reducing the risk of HIV and STI transmission in sex workers and prevent sexually transmitted infection (STI) complications such as pelvic inflammatory disease. 11 A reliable and accessible supply of good-quality condoms is essential. Condom promotion, distribution, and social marketing result in increased condom use and reduced STI and HIV infection rates, especially among female sex workers. 9 Local cultures, language, and traditions should also be considered. Female condoms have successfully prevented pregnancy and reduced STI transmission in analytical studies, and there is in vitro evidence and biological plausibility for HIV prevention. 11 Female condoms empower women by enabling them to negotiate safe sex, by promoting healthy behaviour, and by increasing self-efficacy and sexual confidence. 10 Experienced condom users are significantly less likely to have a condom slip or break compared with first-time users, although users who experience one slippage or breakage are more likely to suffer a second such failure. 12 A qualitative study conducted in Addis Ababa showed that some sex workers never have sex without a condom but used two condoms with their clients, a practice that increases the chances of a condom tearing. 13 This study indicated that low levels of education and alcohol use also affect the likelihood of female sex workers using condoms. Sex work in Ethiopia is vast, diverse, and conducted openly. Sex workers operate in virtually all hotels, bars, and restaurants, and there are street workers on most main roads of towns after sunset. Throughout towns and cities, sex workers and clients meet at informal bars that sell the local brew, Araki. Both sex and Araki are sold from single-room households where women live alone or with their children. In some towns, these are clustered in slum areas where sex work is practiced explicitly by most of the resident women. While it is certainly true that commercial sex is freely available throughout the country and that stigma against sex workers is less pronounced than in some other countries, sex work certainly is stigmatized in Ethiopia. Sex workers commonly face significant stigma-related barriers regardless of where they work, due to their perceived violation of gendered norms through sex with multiple partners and strangers, taking sexual initiative and control, inciting male desires, and receiving fees for sex.
14 Both the literature and the information provided by women suggest that a combination of the stigma associated with promiscuous sexual behaviour and poverty incentivises girls and women to leave towns and rural settings for other towns or cities, where they immediately, or eventually, join the sex industry. 15 Moreover, in Ethiopia, public as well as scholarly perceptions of FSWs as a group have always portrayed them as social misfits, and their supposed attitudes and behaviours are always described as inimical to society. More recently, under the threat posed by the HIV/AIDS epidemic, this identification of sex work with social misfits imbued with dangerous social personalities and personal traits has become even more categorical and insistent. 16 Therefore, given the limited empirical studies on knowledge and practice of HIV risk-reduction options in Ethiopia, this study addressed both knowledge and practice of HIV risk-reduction options, and it shows the relationship between some variables and HIV risk-reduction options. Accordingly, this study tries to answer the following research questions. 1. What are the associated factors for sex work initiation? 2. What is the current level of knowledge and practice of female sex workers regarding HIV risk-reduction options? 3. How are predictor variables, demographic variables, and the behaviour of commercial sex workers related to consistent condom use by female sex workers?

Research design

This study is a cross-sectional study, because this is an appropriate approach to describe the associated factors for sex work initiation and to investigate how predictor variables, demographic variables, and the behaviour of commercial sex workers relate to consistent condom use by female sex workers.

Participants and sampling

To be eligible for the study, women had to be over the age of 18, have a good understanding of Amharic (the local language), and work in different bars, hotels, and streets in Woldia town, Ethiopia. Participants were recruited between September and November 2021 from their workplaces at night and from Woldia health care centre and Woldia hospital, where they attended three-monthly clinical appointments for sexually transmitted infection testing check-ups and a certificate to work. Eligible participants were identified and approached by nurses during the triage process. The actual number of women engaged in sex work in Woldia is anybody's guess; statistics on this are highly impressionistic and imprecise. According to data obtained from the town administration, there are an estimated 1400 FCSWs in Woldia town. It was therefore both impossible and inaccurate to try to decide on a sample size on the basis of some hypothetical figure for the target population. The researcher decided to work with 10% of the above estimated number; therefore, the sample size of this study was 140 FCSWs, basically on the basis of the resources and time available to the researcher for this study. Due to the stigma associated with commercial sex, it was possible to use the simple random sampling technique to select respondents in their working areas and in the hospital or health care centres that would cooperate, so as to reach the sample size.

Instruments

To collect data from the participants of the study, questionnaires were used. The Behavioural Surveillance Surveys (BSS), which have been adopted by WHO and UNAIDS, were modified to suit the Ethiopian context and used to collect the behavioural information from participants. 17
The questionnaire consisted primarily of close-ended questions: 15 items measuring respondents' knowledge of HIV/AIDS risk-reduction options, 11 items measuring respondents' practice of risk-reduction options and risk factors for condom use, and 5 items designed to measure the behaviour of CSWs. All items were translated into the local Ethiopian language (Amharic) to facilitate communication. The questionnaire was pretested prior to the start of the survey. The coefficient alphas for the three scales in the current study were .78, .71, and .69, respectively. Procedure The data were collected after clear explanations of the purpose of the study were given and with the consent of participants. The researcher personally collected the completed questionnaires from each participant. Statistical analysis The collected quantitative data were organized in tables and analysed using frequencies, percentages, and binary logistic regression. Ethical considerations Participation of respondents was strictly on a voluntary basis. Participants were fully informed as to the purpose of the study and consented; written informed consent was obtained prior to answering the questionnaire. All consent forms and questionnaires were marked only with a study number, and no names were recorded anywhere. Measures were taken to ensure the respect, dignity, and freedom of each individual participating and to assure confidentiality in the study. Participants were informed that the information they provided would be kept confidential and would not be disclosed to anyone else. Ethical approval and clearance were obtained from Bahir Dar University, College of Education and Behavioural Science Institutional Review Board, with the unique ethical approval number BDU/CEBS/045/20. Socio-demographic characteristics of the participants As Table 1 indicates, the majority of participants, 102 (72.8%), belonged to the 18-34 years age group. Regarding educational status, 81 (57.9%) had attended primary school. Among the study participants, 115 (82%) had come to Woldia town from rural areas. The largest group of respondents, 52 (37.1%), had worked 4-8 years as sex workers. As indicated in Table 2, looking for a job was the reason 37 (26.4%) came to Woldia town from other areas. Disagreement in the family, death of parents, peer pressure, and attraction to town life were indicated as reasons by 28 (20%), 32 (22.9%), 17 (12.2%), and 15 (10.7%) of CSWs, respectively. Participants indicated that other factors associated with sex work initiation were early marriage, coming to attend education, and others. Table 3 shows that all study participants had heard about HIV/AIDS and STIs. About 137 (97.9%), 132 (94.3%), and 138 (98.6%) of respondents agreed that HIV is transmitted by sexual intercourse, from mother to child, and by sharing sharp materials, respectively. Moreover, 126 (90%) and 120 (85.7%) of participants agreed that HIV is not transmitted by sharing a meal with a person living with HIV or by mosquito bite, respectively. However, 17.2% of respondents indicated that HIV is a curable disease. Large proportions of respondents, 135 (96.4%), 134 (95.7%), and 130 (92.9%), agreed that a healthy-looking person can have HIV, that condoms prevent HIV, and that they are at risk of HIV due to the nature of their work.
From Table 4, it can be seen that 68 (48.6%), 130 (92.9%), and 119 (85%) of respondents reported that they always check the expiry date of condoms when buying and using them, that they do not use any form of lubricant while using a condom, and that they were formally trained on how to put on a male condom, respectively. In addition, 84 (60%), 103 (73.6%), 131 (93.6%), and 86 (61.4%) of respondents reported always putting on the male condom themselves during sex with clients in the past 12 months, having had an incident of condom breakage in the past 3 months, not having practised pair sex with any client in the past 12 months, and having been tested for HIV in the past 3 months, respectively. A total of 124 (88.6%) participants did not use the female condom because they found it uncomfortable. Moreover, 98 (70%) of participants reported that clients had used double or triple condoms together at the same time in the past 12 months. Binary logistic regression was performed to ascertain the effects of age, work experience, training on condom use, alcohol use, difficulty negotiating condom use with clients, and client disapproval of condom use on the consistent condom use of FCSWs; the results are presented in Table 5. The logistic regression model was statistically significant, χ²(6) = 29.4, p < .05, and explained 86.0% (Nagelkerke R²) of the variance in consistent condom use. Age, alcohol use, difficulty negotiating condom use with clients, and client disapproval of condom use were significant predictors of consistent condom use by FCSWs (B = .36, p < .05; B = .09, p < .05; B = .43, p < .05; and B = 1.95, p < .05, respectively); a computational sketch of this type of model is given below. Discussion This study shows that there are pull and push factors for engagement in sex work. The majority of participants reported that looking for a job was the reason for their mobility from rural areas to Woldia town. This contrasts with a study in Addis Ababa, which revealed that 60% of CSWs there were women born and brought up in the city.16 Moreover, this study indicates that disagreement in the family, death of parents, peer pressure, and attraction to town life were reasons that drove sex workers to leave their places of origin. Similarly, peer pressure has been reported as a major factor for females engaging in sex work in Addis Ababa, followed by lack of employment, brokers, poverty at home, and dispute with family.4 Regarding knowledge of the ways of HIV/AIDS risk reduction, all study participants were aware of HIV/AIDS and STIs. Furthermore, the majority of respondents agreed that HIV is transmitted by sexual intercourse, from mother to child, and by sharing sharp materials, and a substantial proportion knew about methods of prevention, such as condom use. This finding is supported by studies concluding that the majority of most-at-risk populations were aware of HIV/AIDS, correctly identified ways of preventing the sexual transmission of HIV, and rejected major misconceptions about HIV transmission.7,8 This study further revealed the practices of CSWs regarding HIV/AIDS risk-reduction options: always checking the expiry date of condoms when buying and using them, being formally trained on how to put on a male condom, putting on the male condom themselves, and experiencing incidents of condom breakage were each reported by the majority of respondents. Moreover, 70% of participants reported that clients had used double or triple condoms together at the same time in the past 12 months.
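As a rough computational illustration of the binary logistic model reported above, the sketch below fits such a model and derives a Nagelkerke R² from the model log-likelihoods. The file and column names are hypothetical placeholders, not the study's actual codebook, and the code is a minimal sketch rather than the authors' analysis pipeline.

```python
# Binary logistic regression with a Nagelkerke pseudo-R^2.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

fsw = pd.read_csv("fsw_survey.csv")  # hypothetical data file

model = smf.logit(
    "consistent_condom_use ~ age + work_experience + condom_training"
    " + alcohol_use + negotiation_difficulty + client_disapproval",
    data=fsw,
).fit()

# Model chi-square: twice the log-likelihood gain over the null model.
chi2 = 2 * (model.llf - model.llnull)

# Nagelkerke R^2 rescales the Cox-Snell R^2 to a 0-1 range.
n = model.nobs
cox_snell = 1 - np.exp(2 * (model.llnull - model.llf) / n)
nagelkerke = cox_snell / (1 - np.exp(2 * model.llnull / n))
print(model.summary(), chi2, nagelkerke)
```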
In line with the condom-breakage and double-condom findings above, a quantitative study in Addis Ababa found that over half of respondents reported using double or triple condoms and experiencing condom breakage during sex in the past 3 months.9 Another interesting finding of this study was that, of the variables examined, only age, alcohol use, difficulty negotiating condom use with clients, and client disapproval of condom use were significant predictors of consistent condom use by FCSWs. Consistent with this, the study in Addis Ababa also found that high levels of alcohol use by CSWs significantly affect condom use.13 This study also has limitations: the questionnaire relied on self-reporting of knowledge, perceptions, and risk-reduction methods; the study design was cross-sectional, which cannot establish cause and effect; and a formal sample size calculation was not carried out as part of the study. Conclusion The results of this study indicated that looking for a job, disagreement in the family, death of parents, peer pressure, and attraction to town life were the major pull and push factors behind mobility from rural areas to Woldia town for engagement in sex work. Regarding knowledge of the ways of HIV/AIDS risk reduction, the majority of respondents were aware of HIV/AIDS and STIs and knew that HIV is transmitted by sexual intercourse, from mother to child, and by sharing sharp materials. A substantial proportion of respondents knew about methods of HIV/AIDS prevention, such as condom use. The study also documented the practices of CSWs regarding HIV/AIDS risk-reduction options: always checking the expiry date of condoms when buying and using them, being formally trained on how to put on the male condom, putting on the male condom themselves, and experiencing condom breakage were each reported by the majority of respondents. The factors affecting consistent condom use by CSWs were age, alcohol use, difficulty negotiating condom use with clients, and client disapproval of condom use. Recommendation Based on these conclusions, the following recommendations are given. Life skills education/training (i.e., psychosocial skills, social skills, assertiveness, and so on) and existing strategies that enable sex workers to develop skills to resist pressure from clients and to support themselves need to be part of the intervention programmes of the town HIV/AIDS Prevention and Control Office (HAPCO), Woldia University, and other partners implementing HIV/AIDS prevention activities with CSWs. As seen in the results, most respondents engaged in sex work for economic reasons; thus, the town HAPCO and the micro- and small-industry office need to organize FCSWs into associations and assist them to engage in other income-generating activities so that they can exit sex work. Offering FSWs an additional choice may result in better protection: female condoms, which are under women's control and allow them to protect themselves or reduce risks, need to be made accessible to female sex workers in the town by HAPCO. Ethics approval and consent to participate Not applicable. Consent for publication Not applicable.
2022-08-13T06:17:24.727Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "701b3202ab696e455ca2e148e45e2ee1cd8d2342", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "424ac9c6c7c9e72e91c945ffa8b58fac1559b189", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7777997
pes2o/s2orc
v3-fos-license
Capacity utilization and the cost of primary care visits: Implications for the costs of scaling up health interventions Objective A great deal of international attention has been focussed recently on how much additional funding is required to scale up health interventions to meet global targets such as the Millennium Development Goals (MDGs). Most of the cost estimates that have been made in response have assumed that unit costs of delivering services will not change as coverage increases or as more and more interventions are delivered together. This is most unlikely. The main objective of this paper is to measure the impact of patient load on the cost per visit at primary health care facilities and the extent to which this would influence estimates of the costs and financial requirements to scale up interventions. Methods Multivariate regression analysis was used to explore the determinants of variability in unit costs using data for 44 countries with a total of 984 observations. Findings Controlling for other possible determinants, we find that the cost of an outpatient visit is very sensitive to the number of patients seen by providers each day at primary care facilities. Each 1% increase in patient throughput results, on average, in a 27% reduction in the cost per visit (p < 0.0001), which can lead to a difference of up to $30 in the observed costs of an outpatient visit at primary facilities in the same setting, other factors held constant. Conclusion Variability in capacity utilization therefore needs to be taken into account in cost estimates, and the paper develops a method by which this can be done. Background Making the best use of available resources is vital in developing countries that are struggling to improve public health with limited funds. This has become even more urgent following their ambitious commitment to achieve the Millennium Development Goals (MDGs) and the realization that funding is not yet sufficient to allow interventions to be scaled up sufficiently to do so [1]. Consequently, demand for information on how much additional funding would be required to attain the MDGs has increased, and in response, a number of studies have tried to estimate the costs countries are likely to face in further scaling up health interventions. Most current estimates are likely to be substantially incorrect, however, with perhaps the most important problem being the assumption that the unit costs of delivering services (for example, the costs per visit to a primary health facility, or the costs of a day in hospital) will not change as coverage increases or as more interventions are delivered together [2,3]. This is most unlikely [4,5]. Increased utilization due to scaling up may have a positive or negative impact on unit costs, depending on the current level of capacity utilization at primary facilities. For example, in facilities functioning at less than full capacity, unit costs are likely to fall in the short term with increases in output, as more services are delivered by existing facilities and fixed costs are distributed over a larger number of recipients. But in the longer run, unit costs could rise if new facilities have to be built in sparsely populated areas or it becomes increasingly difficult to attract the remaining people in need to seek care.
The likely existence of these "economies" and "diseconomies of scale" means that information on the current and expected levels of capacity utilization at different stages of scaling up is key to identifying the true costs of expanding population coverage. This information is rarely reported or collected, however, and even when it is available, there are no guidelines on how to take it into account when estimating unit costs at primary facilities [2,6]. Another limitation of current analyses is that the cost of an outpatient visit or inpatient day used to estimate overall costs is usually derived from a small number of health facilities or programs, sometimes only one [7,8]. This is likely to be misleading given the large variability in capacity utilization across facilities within the same country: by chance, the studied facilities or programs might have higher, or lower, levels of capacity utilization than other facilities or programs, leading to an under- or over-estimate of national costs [9,10]. While this is an indisputable theoretical possibility, the question remains whether it will be important in practice. The main objective of this paper is to measure the impact of the level of capacity utilization, in this case patient load, on the cost of a visit to a primary health care facility. The paper will examine the extent of the variation in this cost due to variations in capacity utilization, and will derive a method that can be used to adjust unit costs for different levels of capacity use. This work is part of the WHO-CHOICE project, whose overall objective is to estimate the costs and health impact of a large number of health interventions at different levels of efficiency and population coverage. For more detail about WHO-CHOICE methods and results see http://www.who.int/choice. Data Part of the unit cost data was obtained from a number of WHO-commissioned studies in a representative sample of facilities in countries where these data were particularly scarce (see Appendix 1 for the list of countries). In addition, data were extracted from manuscripts published in the available indexed search engines: Medline, Econlit, Social Science Citation Index, regional Index Medicus, Eldis (for developing-country data), Commonwealth Agricultural Bureau (CAB), and the British Library for Development Studies databases [7,10-22]. The search terms used were "costs and cost analysis" and "health centre", or the abbreviations HC (health centre) or PHC (primary health centre), or "outpatient care". The languages searched were English, French, Spanish, and Arabic; no Arabic study was found. Additional data were also obtained from a number of studies in the grey literature, from such sources as electronic databases, government regulatory bodies, research institutions, and individual health economists known to the authors [7,8,11-17,19]. Data from all sources were entered in a standard data-extraction template, including all variables that might contribute to understanding the relationship between unit costs and their determinants. The cost per outpatient visit at primary care facilities was the dependent variable,
and the possible explanatory variables included: ownership; total number of outpatient visits; types of costs included in the original cost study (e.g., capital, drugs, laboratory and diagnostics); whether reported figures were based on costs or charges; the total number of full-time-equivalent health care providers at the facility; the reference year for cost data; the currency; and the methods the costing studies had used to allocate joint costs. Data on the number of outpatient visits and the number of providers were used to calculate the indicator of capacity utilisation, the average number of visits per provider per day, when this was not readily reported in the data sources. The number of providers was the full-time-equivalent number of staff, regardless of skill, who examined or treated patients. Data were available for 44 countries, with a total of 984 observations (see Appendix 1 for the list of countries). In addition, information on aggregate variables reflecting socio-economic or other characteristics that may explain part of the variability in unit costs was also collected. The variables included GDP per capita [53], which has been used as a proxy for the level of technology [9,10,54-56], labour productivity [57], and the overall level of demand for health care in different studies [58]. Population density [59], which controls for access-related efficiency gains or losses due to the geographical and demographic characteristics of various settings, was also included. Finally, dummy variables indicating whether a country was an oil producer (i.e., an OPEC member) or had a communist regime either now or in the recent past were also used. In the former case, it might be that costs are higher than would be expected from the level of GDP per capita alone because of inflows of foreign exchange and foreign workers. In the latter, cost levels might be lower than expected due to the historical ability of these countries to control prices and wages. Prior to the analysis, consistency checks were performed and questionable data were queried with the study authors, or omitted if explanations could not be found. Finally, costs were converted to 2000 US dollars using GDP deflators and official exchange rates [60]. STATA software was used for the analysis [61]. Data imputation Before model selection, potential variables for inclusion in the analysis were explored for missing data. Only two variables were affected, the number of visits per provider per day and the total number of annual visits, for which data were missing in 70% and 18% of the observations, respectively. Although the percentage of missing data in the former was relatively high, we decided that the bias introduced by restricting the analysis to observations with complete data would be larger than that caused by imputing missing data combined with appropriate uncertainty analysis [62]. A requirement for using imputation methods is that data are missing at random, which we believe is the case here, since the main reason data are not reported is that this is not yet standard practice in the costing literature. Multiple imputation techniques are the most suitable for our case: the observed values for other settings, as well as relevant covariates, are used to predict a distribution of likely values for the unobserved data, and subsequent analysis can take account of the level of uncertainty surrounding each imputed value [63-66].
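As an aside for readers who want to experiment with this step, a multivariate multiple imputation in the same spirit (though not the authors' Amelia workflow, which is described next) can be sketched in Python with scikit-learn; the file and column names are hypothetical:

```python
# Minimal multiple-imputation sketch: draw m completed datasets by sampling
# from the posterior of a Bayesian regression imputer, then analyse each.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

df = pd.read_csv("unit_costs.csv")  # hypothetical file
cols = ["ln_cost", "ln_gdp_pc", "ln_visits_per_provider_day", "ln_annual_visits"]

m = 5  # number of completed datasets, matching the paper's five imputations
imputed_sets = []
for i in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=i)
    completed = pd.DataFrame(imp.fit_transform(df[cols]), columns=cols)
    imputed_sets.append(completed)
# Downstream: fit the regression on each completed dataset and pool the
# estimates (e.g., with Rubin's rules) to reflect imputation uncertainty.
```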
The statistical model used for multiple imputation is the joint multivariate normal distribution, implemented in the Amelia software [64,67-69]. One of its main advantages is that it produces reliable estimates of standard errors and, through the introduction of random error into the imputation process, considerably reduces potential biases in the imputed data [63]. Details of the estimation process and handling of the model output can be found elsewhere [10]. Model specification Empirical cost-function studies (i.e., studies that relate unit costs to the level of output) have mainly been interested in estimating hospital costs; none to our knowledge has focused on primary care facilities. We followed the basic approach used to estimate hospital cost functions by Lombard et al (1991) and later studies [9,10,70]. The relationship between the cost per visit and the level of capacity use, as well as other possible determinants, was explored using multiple regression analysis; Ordinary Least Squares (OLS) was used. The dependent variable and all continuous explanatory variables explored in this model were transformed into natural logarithms, as this specification resulted in a residual plot that best approximated a normal distribution, a requirement of OLS regression. Natural logs have the added advantage that coefficients can be readily interpreted as elasticities, offering a straightforward measure of the impact of capacity utilization on costs, the main focus of this analysis [71,72]. In addition, robust estimation methods were used, via the "robust" command in STATA [61], to control for clustering resulting from the inclusion of multiple observations per country in the study [73]. The model estimated was of the form ln(C) = β₀ + Σᵢ βᵢXᵢ + e, where C is the cost per outpatient visit; the βᵢ are the estimated parameters; the Xᵢ are the explanatory variables described earlier, transformed into natural logarithms for continuous variables [60]; and e denotes the error term. The cost of an outpatient visit is expected to be positively correlated with GDP per capita; the inclusion of capital, ancillary (laboratory and other diagnostic tests), or drug costs in the original costing; and whether the country produced oil. We expected costs to be negatively correlated with the number of visits per provider per day, our variable of interest, and with population density; and to be lower in public compared to private facilities and in countries that had been under communist regimes. Interaction terms were also tested, such as the interaction between capacity utilization and GDP per capita. Only variables that were consistently significant in the different models were included in the final model, which was selected on econometric grounds. Finally, to estimate the value of the unit cost per outpatient visit that would be expected for given values of the independent variables, the estimated dependent variable was re-transformed from logarithms to natural units using the Duan smearing factor [74]. The Duan smearing factor is used because one of the implicit assumptions of log-transformed models is that the least-squares regression residuals in the transformed space are normally distributed; in this case, back-transforming to estimate unit costs gives the median and not the mean. The smearing method described by Duan (1983) corrects for this back-transformation bias [74]. This was done by multiplying the antilog of the model prediction by 1.45, the smearing correction factor derived from our model.
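A minimal sketch of this specification, assuming one of the completed datasets from the imputation step above and the same hypothetical column names; the smearing factor is computed from the residuals rather than hard-coded at the paper's value of 1.45:

```python
# Log-log OLS with country-clustered robust standard errors, followed by
# Duan's smearing estimator for back-transforming predictions to dollars.
import numpy as np
import statsmodels.formula.api as smf

res = smf.ols(
    "ln_cost ~ ln_gdp_pc + ln_visits_per_provider_day"
    " + public + communist + capital_costs_included",
    data=completed,  # one imputed dataset; a 'country' column is assumed
).fit(cov_type="cluster", cov_kwds={"groups": completed["country"]})

# In a log-log model each coefficient is an elasticity: the % change in
# cost per visit associated with a 1% change in the regressor.
print(res.summary())

# Duan smearing: E[cost] = exp(Xb) * mean(exp(residuals)). A naive exp(Xb)
# would recover the median, not the mean, of the skewed cost distribution.
smear = np.exp(res.resid).mean()
cost_hat = np.exp(res.fittedvalues) * smear
```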
Model-fit Various regression diagnostics were used to judge the goodness-of-fit of the model. They included plots of residuals versus fitted values, Stata's "hettest" to test for heteroskedasticity, variance inflation factors to test for multicollinearity, and the adjusted R-square and F statistics of the regression model [61]. Sensitivity analysis Sensitivity of the results to imputation of missing data was explored by running the models with and without imputation. Results Table 1 shows the variable names, descriptions, and results of the model with the best statistical fit. The adjusted R-square of the combined regressions from the five imputed datasets is 0.52, with an F statistic of 258 (p < 0.0001). All other regression diagnostics showed a good fit: the variance inflation factors ranged between 1.27 and 1.30 (a VIF of more than 20 indicates multicollinearity) [61], and the residual plots had a mean of zero with no specific pattern of distribution. The signs of the coefficients are consistent with our hypotheses; the cost per visit is positively correlated with GDP per capita and the inclusion of capital costs [10], while the number of visits per provider per day, communist or ex-communist status, and public as opposed to private ownership of facilities are associated with a lower cost per visit. The other independent variables did not have a statistically significant impact on costs for our data set. The elasticity of cost per visit to changes in GDP per capita, while positive, is less than one (p < 0.0001). This means that while outpatient costs per visit are higher in countries with higher levels of GDP per capita, they increase at a slower rate than the rise in GDP. This is consistent with previous findings on the relationship between the unit cost of hospital care and GDP per capita [10]. In terms of capacity utilization, the results show that each 1% increase in the number of patients seen per provider per day is associated with a 27% reduction in the cost per visit, everything else held constant (p < 0.0001). The sensitivity of the results to the imputation of missing data was explored; the signs and order of magnitude of the coefficients were stable with or without imputation (see Table 2). Figure 1 plots the predicted values from the model against the unit cost data and the level of GDP per capita. The two lines represent the predicted values of the cost per visit (in natural logs), estimated for a public facility with an average capacity use set arbitrarily at 25 visits per provider per day, including capital costs, and estimated separately for communist and non-communist countries. The figure confirms that the model has a reasonable fit with the data and illustrates the considerable variability in the observed unit costs within a single country (each column of dots represents a country with a specific GDP per capita). To isolate the impact of the level of capacity utilization on unit costs, we re-estimated the predicted values allowing the capacity level to vary but keeping all other variables constant, including GDP per capita. This is illustrated in Figure 2, which shows the relationship between changes in capacity utilization (X axis) and the level of unit cost per outpatient visit (Y axis), estimated for three settings with different income levels, set at US $1000, $5000, and $20000 for illustration purposes. The figure shows that changes in capacity use can lead to a difference of between $5 and $30 in the estimated cost of an outpatient visit.
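The diagnostics named above have direct open-source counterparts (Stata's "hettest" corresponds to the Breusch-Pagan test); a brief sketch, continuing from the fitted res object in the previous block:

```python
# Multicollinearity and heteroskedasticity checks on the fitted model.
from statsmodels.stats.diagnostic import het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

X = res.model.exog  # design matrix, including the intercept column
for i, name in enumerate(res.model.exog_names):
    if name != "Intercept":
        print(name, variance_inflation_factor(X, i))

# Breusch-Pagan: small p-values signal heteroskedastic residuals.
lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(res.resid, X)
print(lm_pvalue, f_pvalue)
```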
The estimated costs of scaling up interventions could, therefore, be substantially different depending on the level of capacity utilization that happened to be associated with the data used for the costs of outpatient care. Discussion and policy relevance This paper presents critical evidence on the extent of variability in the cost of a patient visit at primary facilities within and across countries, and the proportion that can be explained by variations in patient load as well as other determinants. While a substantial portion of the observed variability could be explained by the specified determinants, some unexplained variability remained, possibly linked to variables that we could not measure, including quality of care, case mix, and salary differentials for staff working in remote areas. These variables are likely to explain part of the variability in the observed unit cost data, but we did not have the data to explore this. There are other limitations of this type of analysis that must also be considered when interpreting the results. While the model incorporates a very extensive database on unit costs, much larger than has previously been available, it is always preferable to include more data points. In this case, increasing the number of countries for which observations were available, and having more information on possible explanatory variables, would increase the explanatory power of the model and the validity of the results for extrapolation to a wider number of countries. We also recognize that the mathematical specification of the model we report here, the log-log form, does not allow the identification of diseconomies of scale if they exist. Cross-country studies like this typically use this functional form, which can be interpreted as the downward-sloping part of a long-run cost curve. It is possible, as we stated in the introduction, that some countries will face diseconomies of scale if, for example, they have to build new health facilities in isolated areas and these facilities are not fully utilized. In that case, the higher unit costs of the new facilities can still be estimated from our model, by using the country's observed GDP per capita, for example, together with the lower level of capacity utilization associated with the expansion of facilities. Estimating the likely capacity utilization rates associated with the expansion of health facilities to increasingly remote areas is, of course, complex, but some experience exists in using spatial models to identify the population's physical accessibility to different possible locations of new health facilities [75]. Bearing in mind these limitations, we can still be confident of a number of important conclusions. Firstly, the results show that unit costs are very sensitive to the number of patients seen by providers each day: each 1% increase in the number of patients seen per provider per day is associated with a 27% reduction in the cost per visit. This means that estimates of the costs of scaling up, and the resulting estimates of financial needs, that are based on outpatient visit costs taken from a single study, or a few studies, could be markedly wrong. These studies could well have capacity utilisation rates that are atypical of the country as a whole. Moreover, such estimates will also be wrong if they do not allow the cost of an outpatient visit to change as coverage increases.
Because most of the studies of the costs of scaling up to meet the MDGs do not even report the information on capacity utilization used to derive their outpatient cost estimates, readers can have little confidence that the overall costs they estimate are even approximately correct. There are two additional practical uses of the analysis reported in this paper. The first is to apply the model to analyse and adjust locally available unit cost estimates, taking into account differences in capacity use and other determinants. The second is to use the results of the model to estimate the likely unit cost per visit at different levels of capacity use in settings where information on unit costs is not available. There have been several applications of the latter, including estimating the cost-effectiveness of a large set of interventions as part of the WHO-CHOICE [76,77] and Disease Control Priorities (DCP) [78] projects; estimating the cost of scaling up health interventions to achieve universal coverage of key interventions addressing major disease burdens such as HIV/AIDS [62,79]; and estimating the cost of interventions to improve maternal and child health [80-82]. Finally, our findings have important implications for the transferability and validity of costing and cost-effectiveness results. General policy decisions should not be based on the results of costing studies that do not report capacity utilization or that base the analysis of the cost of scaling up on current costs of providing care. [Figure 1. Predicted values (regression lines) for communist and non-communist countries plotted against the natural log of GDP per capita (X axis); the Y axis shows the raw cost-per-visit data in natural logs. N = 984.] [Figure 2. Impact of patient load on unit cost per visit in three settings.]
2014-10-01T00:00:00.000Z
2008-11-13T00:00:00.000
{ "year": 2008, "sha1": "e15867a37a8de17ff1e17480fadc538a1baa4f8f", "oa_license": "CCBY", "oa_url": "https://resource-allocation.biomedcentral.com/track/pdf/10.1186/1478-7547-6-22", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e15867a37a8de17ff1e17480fadc538a1baa4f8f", "s2fieldsofstudy": [ "Economics", "Medicine" ], "extfieldsofstudy": [] }
54547374
pes2o/s2orc
v3-fos-license
Two cases of plasmacytoid variant of bladder cancer. Stacy Fan; Faculty Reviewer: Dr Nicholas Power, MD, FRCSC (Department of Urology). abstract Introduction: Plasmacytoid urothelial carcinoma (PUC) is a rare and aggressive variant of bladder cancer. Since PUC is infrequently encountered, its management continues to be a formidable challenge. Case 1: A 74-year-old gentleman was admitted with a 1-month history of gross hematuria and urinary overflow incontinence. Cystoscopy revealed an abnormal growth at the bladder base and biopsies were taken. This patient had high-grade PUC sparing the muscularis propria (MP). He underwent nephrostomy tube insertion and subsequent conversion to internal stents 8 weeks later. Repeat resection showed muscle-invasive PUC. Repeat CT showed subcentimeter bilateral inguinal and external iliac lymphadenopathy. The surgeon discovered a dense infiltrative reaction within the retroperitoneum and colonic obstruction during radical cystoprostatectomy and palliative colostomy, confirming metastatic PUC. Case 2: A 72-year-old gentleman presented with gross hematuria, bleeding from an established end ileostomy, and bowel obstruction symptoms. Cystoscopy identified an anterior bladder tumour and CT staging confirmed a small bowel obstruction (SBO). Transurethral resection showed PUC with invasion of the MP and lymphovascular space. Obvious positive margins were identified during radical cystoprostatectomy, as the tumour clearly invaded bone and the rectal stump. Ileostomy output ceased post-operatively on day 4, and bowel contents began leaking through his urethra the following day. Reassessment of goals of care resulted in cessation of significant interventions. Unfortunately, he passed away 2 weeks post-operatively due to sepsis. Discussion: These cases illustrate the aggressiveness of PUC bladder tumours and how imaging frequently under-stages these patients. Neoadjuvant chemotherapy prior to attempting surgical control may be beneficial. introduction Bladder cancer is the ninth most common cancer worldwide.1 Approximately 430 000 incident cases of bladder cancer, predominantly in men, were reported worldwide in 2012.2 Interestingly, the incidence and mortality rate of bladder cancer is higher in more developed countries, which is associated with the higher prevalence of tobacco use in these regions.1 The risk of developing bladder cancer is associated with genetic variants, such as mutations in N-acetyltransferase 2, and environmental exposures, primarily tobacco smoke.3 Tobacco smoke contains aromatic amines and hydrocarbons, which are renally excreted and are carcinogenic to the urinary system.3 Tobacco smoking is the most important and modifiable risk factor and may contribute to half of all bladder cancers.4 Other environmental exposures include analgesic overuse, occupational exposures, and chronic Schistosoma haematobium cystitis.3 Urothelial carcinoma (UC) accounts for 90% of all bladder cancer cases in the United States and Western Europe.5 Invasive UC often differentiates into specific cell types and variants. Immunohistochemical identification of these subtypes is critical due to their potentially aggressive nature.
These UC subtypes may warrant novel approaches to management.6 Plasmacytoid urothelial carcinoma (PUC) is a rare variant, accounting for 2.7% of UC.7 Histologically, it appears as discohesive cells with eccentric nuclei and abundant eosinophilic cytoplasm in a single-cell growth pattern.8 It is generally diagnosed at advanced stages and is associated with poorer survival rates than conventional UC.9,10 Treatment of PUC with transurethral resection or open cystectomy with or without adjuvant chemotherapy has been described in the literature. More recently, clinicians have considered the role of neoadjuvant chemotherapy in PUC management due to its aggressive nature.11-14 In this report, we describe the cases of two patients who presented with macroscopic hematuria and were diagnosed with PUC of the bladder. They offer important insights into how patients should be counseled and managed differently than those with common UC. case reports Case 1: A 74-year-old gentleman with a history of non-insulin-dependent type 2 diabetes mellitus, hypertension, atrial fibrillation, dyslipidemia, gout, and hypothyroidism was admitted to the medicine service after a fall and an upper gastrointestinal bleed. He was a lifelong non-smoker and drank minimal alcohol. Upper endoscopy revealed a mass, which was a presumed gastrointestinal stromal tumour. He developed an acute-on-chronic kidney injury with a creatinine of ~500 μmol/L. Renal ultrasound and computed tomography (CT) of the abdomen/pelvis revealed bilateral hydronephrosis and a stone in the distal left ureter, which was later treated with laser lithotripsy. The patient had a month-long history of gross hematuria and urinary overflow incontinence. Abnormal urothelium at the bladder base was biopsied during investigation of the hydronephrosis. Surprisingly, this biopsy showed invasive high-grade PUC, with involvement of the lamina propria, while the muscularis propria remained clear. Ureteral stents could not be inserted since the ureteric orifices were not visualized; consequently, the patient required nephrostomy tubes. The patient underwent repeat transurethral resection 8 weeks later with an attempt to convert the nephrostomy tubes to internal stents. Repeat resection identified muscle-invasive PUC, demonstrating further progression of his disease. Two weeks later, he presented to the emergency department with malaise and fatigue. Basic bloodwork revealed a serum potassium of 6.9 mmol/L, an anion gap of 26, and a serum creatinine of 1213 μmol/L.
Ultrasound showed persistent bilateral hydronephrosis despite bilateral ureteric stents in situ. Urinalysis revealed a leukocyte count of 500, nitrites, and a large amount of blood. The patient was admitted to the urology service for a pseudomonas-positive and vancomycin-resistant enterococcus-positive urinary tract infection and underwent bilateral nephrostomy tube insertion. Appropriate antibiotics were started. During this second admission, CT of the chest/abdomen/pelvis showed only subcentimeter bilateral inguinal and external iliac lymphadenopathy. The patient consented to radical cystoprostatectomy. Intraoperatively, however, a dense infiltrative and desmoplastic reaction was noted in the retroperitoneum, along with distal colonic obstruction. In other words, the patient had unresectable metastatic disease, despite a lack of obvious evidence of metastases on imaging. The sigmoid colon was resected and an end colostomy was created for palliation. Pathology confirmed metastatic urothelial carcinoma in the resected bowel. At the time of writing, the patient was due to begin palliative chemotherapy with gemcitabine/carboplatin (due to poor renal function) with concomitant radiation for symptom control. Case 2: A 72-year-old gentleman presented to a community hospital with gross hematuria and bleeding from his end ileostomy, which had been created after a subtotal colectomy for inflammatory bowel disease several years prior. He also reported a 1-week history of constipation and obstipation. He had a remote 4-year smoking history and a negative family history for bladder, kidney, or prostate cancers.
Cystoscopy was performed, and a bladder tumour was identified. CT staging showed a small bowel obstruction. The patient underwent transurethral resection of the large anterior bladder tumour. Histology showed PUC with infiltration of the lamina propria and muscularis propria, as well as lymphovascular space invasion. Unfortunately, resection was incomplete due to the extent of the tumour. At this point, the patient was transferred to our tertiary care centre for surgical management of the small bowel obstruction (which had failed conservative treatment) and radical cystoprostatectomy for PUC. There were obvious positive margins intraoperatively, as the surgeon cut through tumour along bone anteriorly and along the rectal stump posteriorly. The patient was stable for the first few days post-operatively with good ileostomy output. However, ileostomy output stopped completely by day 4. On day 5, he developed leakage of bowel contents through his urethra, leading to high suspicion that a fistula had formed between the small bowel and urethra. Subsequently, he became febrile and received 9 days of broad-spectrum antibiotics before deciding to withdraw from significant interventions. Unfortunately, the patient passed away 2 weeks post-operatively, secondary to sepsis. discussion The first case of PUC was reported by Sahin et al in 1991.15 It was not until 2004 that the World Health Organization included this rare variant in the classification of UC. Li et al found that patients with PUC who underwent radical cystectomy presented more frequently with advanced tumour stage and positive lymph nodes, and were more likely to receive neoadjuvant chemotherapy as compared to those with regular UC.16 Additionally, positive surgical margins were over 5 times more likely in the PUC group compared to UC. Median overall survival was reduced in the PUC group compared to pure UC (3.8 years vs. 8 years, respectively). Cockerill et al found similar results, with increased extravesical disease and positive margins in PUC as compared to regular UC.17 PUC was associated with decreased overall survival, cancer-specific survival, and local recurrence-free survival at 5-year follow-up. Another study found that PUC patients presented with higher stage at cystectomy, had increased risk of lymph node involvement, and had increased rates of positive surgical margins in comparison to regular UC.18 Importantly, they noted that transurethral resection of PUC was associated with more disseminated disease, and cystectomy of PUC was associated with a doubled mortality risk as compared to regular UC.20-22 PUC has a tendency for intraperitoneal spread, frequently discovered at cystectomy.23,24 There are currently no prospective randomized controlled trials that have investigated optimal treatment of PUC, due to the rarity of this subtype. Consequently, management is largely guided by case reports and retrospective studies, with no standard treatment regimens. These two cases demonstrate how rapidly and aggressively PUC can metastasize to potentially cause obstructive symptoms and how imaging may be imprecise. Since PUC spreads through single cells, it is rare for CT to accurately stage these patients.8 Therefore, it may be beneficial to consider neoadjuvant systemic therapy prior to surgical attempts. Early delivery of chemotherapy may destroy locally metastatic cells, preventing distant spread, and may limit the inflammatory response associated with locally advanced disease to increase surgical success.
There are limited studies that have investigated the role of neoadjuvant chemotherapy in PUC. Kohno et al were the first to describe a successful pathologic response after two cycles of neoadjuvant chemotherapy (cisplatin, etoposide, methotrexate, and vinblastine) and radical cystectomy, with no evidence of recurrence at 3-year follow-up.11 In 2011, Hayashi et al treated a patient with metastatic PUC with neoadjuvant cisplatin and gemcitabine, then radical cystectomy.12 Patients from both reports remained disease-free, with death occurring at 3 years11 and 9 months12, respectively. Dayyani et al used cisplatin, doxorubicin, methotrexate, and vinblastine regimens in 12 patients with PUC. Many patients demonstrated pathologic improvement, but the authors did not identify a survival difference between receiving neoadjuvant chemotherapy and initial radical cystectomy.13 conclusion These case reports emphasize the importance of identifying PUC because of the unique therapeutic and prognostic implications this UC variant carries. Early diagnosis is critical but difficult in these patients due to its aggressive yet insidious nature. An additional obstacle is that PUC spreads via individual cells.8 Consequently, CT drastically under-stages locally advanced disease. Radical cystectomy and lymph node dissection, the standard of care for simple UC, is extremely difficult in these patients and is often unsuccessful (with positive surgical margins when attempted) due to dense desmoplastic response and fibrosis. Therefore, consideration of neoadjuvant systemic therapy prior to attempts at surgical extirpation is highly warranted in this rare disease.
2018-12-03T20:32:40.255Z
2018-01-20T00:00:00.000
{ "year": 2018, "sha1": "7b7a76d73883f5d9ef57e891959d83f0dc86602d", "oa_license": "CCBY", "oa_url": "https://ojs.lib.uwo.ca/index.php/uwomj/article/download/2111/1412", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7b7a76d73883f5d9ef57e891959d83f0dc86602d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
12104559
pes2o/s2orc
v3-fos-license
METALLIC STAPLES LINE MIMICKING A RETAINED SURGICAL SPONGE Retained surgical sponges (also called gossypibomas or textilomas) are a rare but well recognized and dreaded complication of surgery. Several authors have highlighted the inconsistency and variability of the radiologic appearances of intrathoracic gossypibomas (1, 2). In this report we describe a peculiar case where, paradoxically enough, modern imaging techniques introduced unanticipated confounding factors into this diagnostic problem. Case report A 57-year-old man with a medical history of myotonic dystrophy type 1 (Steinert disease) underwent right upper lobectomy and systematic lymphadenectomy for a 20-mm well differentiated neuroendocrine carcinoma with lobar and interlobar lymph node involvement, staged as a T1N1 tumor. Multiple firings (n = 4) of a gastrointestinal anastomosis stapler with a 55 mm, 3.85 mm load of titanium staples were used to complete both the minor and major fissures. The surgical procedure was straightforward: no safety-compromising events known to be associated with a risk of retention of a foreign body occurred (3, 4), and repeated counting procedures of instruments and surgical sponges were correct. On postoperative day 2, hemodynamic compromise and acute respiratory failure developed following aspiration of gastric contents. The patient underwent endotracheal intubation and mechanical ventilation. Anteroposterior portable chest radiographs demonstrated opacification of the right paratracheal zone, which was interpreted as postoperative pulmonary bleeding and inadequate ventilation of the middle lobe (Fig. 1). However, owing to a poor response to treatment, a multi-detector computed tomography (MDCT) scan was obtained on postoperative day 10, which showed a complex mass in the right upper hemithorax. A serpiginous linear opacity of metallic density was seen within the mass (Fig. 2). The findings were thought most likely to represent a retained surgical sponge, and thoracotomy for removal was recommended. At operation, no retained surgical sponge was found, and a dark bluish zone of hepatization in the middle lobe, bordered by the line of the staples used to complete the minor fissure, was removed. Pathologic evaluation showed recent hemorrhagic infarction. The postoperative course was complicated by persistent disorders of deglutition and an impaired cough mechanism. The patient was discharged to a long-term care facility with tracheostomy and feeding jejunostomy on postoperative day 35. Discussion Despite numerous published reports, there is little consensus in the literature on the incidence of retained surgical sponges, with estimates ranging from 1 in 1,000 to 1 in over 18,000 procedures (3). Most likely, the difficulty in ascertaining the true incidence of retained surgical sponges results from the lack of established reporting systems for such adverse occurrences, the concern for potentially serious medicolegal implications, and the number of asymptomatic retained surgical sponges remaining undiscovered for years or decades (5). Indeed, the inadvertent loss of surgical sponges continues to be a dreadful hazard of surgery. Surgical sponges are routinely supplied with radiopaque markers allowing them to be readily recognized on imaging studies. However, in the early postoperative period, proper identification of radiopaque sponge markers on plain radiographs is hampered by several factors, including the location and orientation of the foreign body, marker distortion by folding or twisting, and the presence of metallic artifacts from surgically placed staple lines or clips (6-8). Besides, detection of radiopaque sponge markers may not be as easy as claimed, owing to the less than ideal diagnostic quality of anteroposterior portable chest films obtained in the perioperative period. In our case, even retrospectively, the specific pattern of the staple lines was not recognizable on bedside radiography. Finally, the most characteristic CT signs of retained surgical sponges, namely a spongiform or whirl-like pattern with gas bubbles in a mass with a thin enhancing capsule, may be absent or may be confused with loculated infection and organized hematoma or seroma (9). All of these factors can lead to diagnostic dilemmas and misinterpretations. Recent advances in MDCT technology have greatly improved the quality of three-dimensional (3D) renderings. However, combining our findings with those of two prior case reports (7, 8) suggests that the reconstruction of images using the maximum intensity projection (MIP) technique, besides lacking accurate 3D perspective, results in increased opacity of the line of the mechanical staples used to complete interlobar fissures, which simulates the radiopaque marker of a supposedly retained surgical sponge. In retrospect, closer examination of additional images generated with both MIP and volume-rendering techniques (Fig. 2B, C), or the creation of cine loops from MIP images in multiple planes, revealed the interrupted pattern of metallic density, consistent with the multiple firings of the gastrointestinal anastomosis stapler used to separate the lobes. On the other hand, disintegration of the radiopaque marker embedded in a retained surgical sponge was to be excluded in view of the short time that had elapsed since surgery. We reiterate the opinion of others (6, 10) that optimal use of imaging techniques (native axial CT data and multiplanar reformation with MIP for the reconstruction of two-dimensional images, and volume rendering for the creation of 3D images) is required for achieving a confident understanding of equivocal imaging findings. Besides, referral to a photoradiographic atlas of surgical items that can be left, either intentionally or unintentionally, in surgical wounds would contribute to making a prompt diagnosis and facilitating appropriate treatment. L. Cardinale 1, C. Fava 1, N. Dervisci, D. Najada 1, P. Borasio 2, F.
The inadvertent loss of surgical sponges remains a dreadful hazard of surgery. We report the case of a patient with a medical history of myotonic dystrophy type 1 who had received a right upper lobectomy for the treatment of a stage IIA (pT1N1M0) well differentiated neuroendocrine carcinoma. In the early postoperative period, aspiration of gastric contents occurred and the patient underwent endotracheal intubation and mechanical ventilation. A follow-up multidetector computed tomography (MDCT) scan of the chest showed a complex mass in interlobar position with an internal radiopaque serpiginous thread of metallic density which was assumed to represent a retained surgical sponge. Upon surgical exploration, no retained foreign body was found, and a zone of recent hemorrhagic infarction, bordered by the line of the mechanical staples used to complete the minor fissure, was removed from the middle lobe. When evaluating patients suspected of having a retained surgical sponge, thoracic surgeons and radiologists should be aware of this potential source of confusion.

Key-word: Foreign bodies.

From: 1. Radiology Unit, 2. Thoracic Surgery Unit, University of Turin, Department of Clinical & Biological Sciences, Orbassano, Turin, Italy. Address for correspondence: Dr L. Cardinale, University of Turin, Department of Clinical & Biological Sciences, Radiology Unit, San Luigi Hospital, I-10043 Orbassano (Torino), Italy. E-mail: luciano.cardinale@gmail.com

Fig. 1. Anteroposterior portable chest radiograph obtained on postoperative day 8 shows opacification of the right paratracheal zone. Note also a pleural fluid collection in the left lower lung zone.

Fig. 2. MDCT. Coronal MIP image (A) shows a heterogeneous mass in the right upper hemithorax with an internal radiopaque thread of metallic density which was assumed to be a retained surgical sponge. Additional sagittal oblique MIP (B) and volume-rendered (C) images show the interrupted serpiginous pattern of metallic density, compatible with the multiple firings of the staple device used to divide interlobar fissures.
2018-04-03T00:28:14.115Z
2010-05-06T00:00:00.000
{ "year": 2010, "sha1": "4449a263a4416d887d032fbba4640a520e80f279", "oa_license": "CCBY", "oa_url": "http://www.jbsr.be/articles/10.5334/jbr-btr.332/galley/329/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "e5d879cd938f6afe91e8061ca3d44f07cf9e2237", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13938035
pes2o/s2orc
v3-fos-license
Comparing United States and Canadian population exposures from National Biomonitoring Surveys: Bisphenol A intake as a case study The Centers for Disease Control and Prevention provides biomonitoring data in the United States as part of the National Health and Nutrition Examination Survey (NHANES). Recently, Statistics Canada initiated a similar survey — the Canadian Health Measures Survey (CHMS). Comparison of US and Canadian biomonitoring data can generate hypotheses regarding human exposures from environmental media and consumer products. To ensure that such comparisons are scientifically meaningful, it is essential to first evaluate aspects of the surveys' methods that can impact comparability of data. We examined CHMS and NHANES methodologies, using bisphenol A (BPA) as a case study, to evaluate whether survey differences exist that would hinder our ability to compare chemical concentrations between countries. We explored methods associated with participant selection, urine sampling, and analytical methods. BPA intakes were also estimated to address body weight differences between countries. Differences in survey methods were identified but are unlikely to have substantial impacts on inter-survey comparisons of BPA intakes. BPA intakes for both countries are below health-based guidance values set by the US, Canada and the European Food Safety Authority. We recommend that before comparing biomonitoring data between surveys, a thorough review of methodologic aspects that might impact biomonitoring results be conducted. INTRODUCTION The National Center for Environmental Health (NCEH) at the Centers for Disease Control and Prevention (CDC) provides data on an increasing number of chemicals in blood and urine for a nationally representative sample of the US population as part of the National Health and Nutrition Examination Survey (NHANES). Recently, Statistics Canada initiated a similar nationally representative survey called the Canadian Health Measures Survey (CHMS) and released data from its first collection cycle (cycle 1, 2007 --2009). 1,2 Comparisons of population exposures across countries can be highly informative and can generate hypotheses regarding differences and similarities in exposures from various sources such as air, water, soil, food, and consumer products. To ensure that such comparisons are scientifically meaningful, it is essential to evaluate aspects of the surveys' methods that have the potential to impact data comparability. These include differences in urine and blood collection and handling, analytical approaches, and data analysis as well as differences in population characteristics. For this paper, we analyzed spot urinary bisphenol A (BPA) data from NHANES and CHMS to highlight important methodological issues that should be reviewed before comparing population exposures using these data sets. This is the first time that nationally representative data for Canada have been released, permitting comparison with US population-based data. BPA was selected because it is the subject of scientific and regulatory interest in both countries, there are sufficient data to assess intakes based on urinary concentrations, and there may be sufficient data to begin to examine the effect of temporal changes on our ability to compare intakes between surveys. 
To determine whether population estimates differ between surveys, we first describe spot urinary BPA concentrations for Canada (CHMS 2007-2009) and the US (NHANES 2007-2008) and estimate intakes (as dose in units of nanograms per kilogram-day) based on those data. We then explore the possible reasons for the differences between the US and Canadian estimates by examining the survey methodologies. We focus on population characteristics, procedures related to collection and handling of urine samples, analytical methods, and data reporting. As both CHMS and NHANES will continue to generate population-based biomonitoring data for many chemicals, this review is timely and important for researchers seeking to compare data across countries.

Estimation of Daily BPA Intakes

The method for estimating daily intake for individuals with spot urinary BPA data from NHANES has been described previously 3,4; a similar method was used to estimate daily intakes for the 2007-2008 NHANES (http://www.cdc.gov/nchs/nhanes/nhanes2007-2008/lab07_08.htm) and the 2007-2009 CHMS databases. For each urinary BPA value, the concentration (ng/ml) was combined with an estimated 24-h urinary output volume (ml) to estimate daily BPA excretion (ng/day), which is assumed to be the same as the daily intake. The daily intake for each individual was divided by that individual's body weight to give daily intake adjusted for body weight (ng/kg-day) (Eq. 1). As body weights for the United States are generally higher than those for Canadians, 5 intake, which includes an adjustment for body weight, is a more informative measure of comparative exposure than urinary BPA; intakes also allow for comparison with health-based guidance values such as reference doses (RfDs) and tolerable daily intakes (TDIs).

Urinary BPA (ng/ml) × urinary output (ml/day) / body weight (kg) = ng BPA/kg-day  (1)

Since 24-h urinary output data were not collected as part of NHANES or CHMS, generic values (given in 3,4) describing typical urinary output based on age and gender were used to estimate total daily BPA excretion in nanograms. The urinary BPA data were used to represent daily intake because excretion of BPA (parent and metabolite) into urine is essentially complete in 24 h. 6 Volume-based urinary BPA data (ng/ml urine), rather than creatinine-adjusted BPA data, were used (the rationale is described in 3,4). Various researchers have explored the issue of intra-individual variability in urinary BPA measures, and while this introduces uncertainty in population estimates of BPA intake, Ye et al. 7 concluded "… when the population investigated is sufficiently large and samples are randomly collected relative to meal ingestion times and bladder emptying times, the single spot-sampling approach may adequately reflect the average exposure of the population to BPA." Further, Mahalingaiah et al. 8 stated that "despite within-person variability in urinary BPA concentrations, a single sample is predictive of long-term exposure (over weeks and months)." LaKind and Naiman 3 concluded that NHANES cross-sectional data provide a reasonable reference range for single-day exposures and for estimating average population exposures (and therefore intakes). Distributions of intakes representative of the US and Canadian populations were determined for all participants 6-79 years, by gender and age groups (6-11, 12-19, 20-39, 40-59, and 60-79 years).
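To make Eq. 1 concrete, here is a minimal Python sketch of the intake calculation. The generic urinary-output values are illustrative placeholders only; the surveys used age- and gender-specific values from the literature (refs. 3, 4), which are not reproduced here.

```python
# Sketch of the daily-intake calculation in Eq. 1 (illustrative only).
# The 24-h urinary-output values below are hypothetical placeholders;
# the actual analysis used age- and gender-specific values from refs. 3, 4.

GENERIC_URINE_OUTPUT_ML = {
    "child": 700.0,        # hypothetical
    "adult_female": 1200.0,  # hypothetical
    "adult_male": 1600.0,    # hypothetical
}

def daily_bpa_intake_ng_per_kg(urinary_bpa_ng_per_ml: float,
                               urine_output_ml_per_day: float,
                               body_weight_kg: float) -> float:
    """Daily BPA intake (ng/kg-day), assuming 24-h urinary excretion equals intake."""
    daily_excretion_ng = urinary_bpa_ng_per_ml * urine_output_ml_per_day
    return daily_excretion_ng / body_weight_kg

# Example: spot urine of 2.1 ng/ml for an 80-kg adult male
print(daily_bpa_intake_ng_per_kg(2.1, GENERIC_URINE_OUTPUT_ML["adult_male"], 80.0))
# -> 42.0 ng/kg-day
```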
This method differs slightly from past methodology: NHANES includes individuals over the age of 79, and previous estimates of intakes included these individuals. Since the age cutoff for CHMS is 79 years, the NHANES database was truncated at 79 years for consistency. The total numbers of respondents with all necessary data for estimating daily BPA intake were 2467 for NHANES and 5472 for CHMS. Calculations of point estimates and confidence intervals (CIs) for the geometric mean intakes and intake percentiles for the US population were carried out in the R platform, 9 using the R survey package. 10 To estimate various population (and population subgroup) intake quantities such as means or percentiles, weighted means and percentiles were calculated using NHANES 2007-2008 2-year weights provided by CDC. 11 CIs for percentiles were calculated using the survey package's implementation of Woodruff's 12 method. Analysis of CHMS data was performed using SAS version 9.2 (SAS Institute, Cary, NC, USA, 2003) and SUDAAN version 10 (RTI International, Research Triangle Park, NC, USA, 2008). Geometric means, selected percentiles, and corresponding CIs, overall and by population subgroups, were calculated using the bootstrap technique; the degrees of freedom were specified in the software as degrees of freedom = 11 to account for the complex survey design. 2 A comparison of the statistical methods used for NHANES and CHMS data was conducted; the two methods yielded the same estimated intakes.

For measures below the limit of detection (LOD), CHMS assigns a value of LOD/2; NHANES assigns a value of LOD/√2. For consistency between the US and Canada, we elected to assign a value of LOD/2 for measurements below the LOD in both data sets. However, we also evaluated the impact on urinary BPA geometric means of using LOD/2 (CHMS) or LOD/√2 (NHANES).

Survey Methodology Comparison

Differences in the NHANES and CHMS methods related to population characteristics, urine collection, analytical methods, and data reporting were evaluated using information from the literature, statistical investigations, and inter-laboratory comparisons. Survey method information was derived from NHANES documentation (http://www.cdc.gov/nchs/nhanes/nhanes2007-2008/datadoc_changes_0708.htm) and from CHMS reports. 2,13,14 As a result of the short physiologic clearance time for BPA (half-life of <2 h 15), if the preponderance of BPA exposure is via the diet, then a longer fasting time should correlate with lower urinary BPA levels. Thus, systematic differences in either fasting times or adherence rates between surveys could hinder the comparison of results. To test this, a correlation test between fasting time and log urinary BPA was conducted by age groups. To explore the possibility of analytical bias between CDC and INSPQ (the Institut national de santé publique du Québec, the laboratory that analyzed samples as part of the CHMS), data from two sets of proficiency testing materials (PTMs) were evaluated. Both laboratories assessed PTMs for urinary BPA from the Arctic Monitoring and Assessment Program/INSPQ ring test and the German External Quality Assessment Scheme for Analyses in Biological Materials (G-EQUAS). In each of these programs, registered participants received a set of two to three samples twice a year. Each laboratory returned its analytical results to the program and received a report showing the reliability of its results.
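The sensitivity of geometric means to the non-detect substitution rule (LOD/2 versus LOD/√2) described above can be illustrated with a short sketch; the measurements, LOD, and sample size below are hypothetical.

```python
import math

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def substitute_nondetects(measurements, lod, divisor):
    """Replace below-LOD values by LOD/divisor (2 in CHMS, sqrt(2) in NHANES)."""
    return [v if v >= lod else lod / divisor for v in measurements]

# Hypothetical spot-urine BPA values (ng/ml); 0.2 and 0.3 fall below the LOD of 0.4
raw = [2.3, 1.8, 0.2, 3.1, 0.3, 2.7, 1.1]
lod = 0.4
gm_lod2 = geometric_mean(substitute_nondetects(raw, lod, 2.0))
gm_lodsqrt2 = geometric_mean(substitute_nondetects(raw, lod, math.sqrt(2)))
print(f"GM with LOD/2: {gm_lod2:.2f}; GM with LOD/sqrt(2): {gm_lodsqrt2:.2f}")
```

With few non-detects, as in the BPA data here, the two rules give nearly identical geometric means; the gap widens as the non-detect fraction grows.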
Although the proficiency testing took place during 2010-2011, after NHANES 2007-2008 and CHMS 2007-2009 were completed, these data were used as there are no data from actual urine samples with which to assess analytical bias.

Comparison of Urinary BPA and Intakes in the US and Canada

Urinary BPA levels in Canada were statistically significantly lower than in the US for all age/gender groupings (Tables 1 and 2). In general, body weights for Canadians are lower than in the United States (Table 3), so for equivalent urinary BPA levels for US and Canadian individuals, intakes for Canadians would be higher. Despite the lower Canadian body weights, BPA intakes are statistically significantly lower in Canada as compared with the US for all age/gender groupings (Tables 4 and 5).

Comparison of CHMS and NHANES Methods

The following methodological aspects and their potential impact on spot urinary BPA levels are described below: urine collection and population characteristics (Table 6), analytical procedures (Table 7), and data reporting.

Urine Collection and Population Characteristics

Urine collection. The CHMS performed field blank testing with distilled water at all sites to account for baseline contamination from the site environment, collection materials, and transport method. The field blank procedures mimicked all the procedures of the survey samples, including the urine collection, handling, storage, shipping, and analysis. After adjustment for reagent blanks, field blank BPA data indicated no BPA contamination. 14 NHANES does not use field blanks but rather tests all new collection materials to ensure no contamination exists. CHMS requested midstream urine samples; NHANES did not specify the portion of the urine stream to be sampled.

Collection timeframe. Although CHMS and NHANES samples used in this study were not collected over identical time periods (Table 6), there was only a 2-month shift in sampling times between the two surveys, which is not expected to influence overall results.

Population sampled. In NHANES, urinary BPA data were available for ages 6 years and older (no upper age cutoff), while in CHMS the age range was 6 to 79 years. For the CHMS sample with urinary BPA data, 81.2% of the respondents were white and <3.9% were black, while for NHANES 40% of the participants were white and 24% were black (blacks were oversampled and sample weights applied to produce an unbiased national estimate) (http://www.cdc.gov/nchs/tutorials/nhanes/SurveyDesign/SampleDesign/Info1.htm). In the United States, blacks had higher urinary BPA levels (geometric mean of 2.6 ng/ml, 95% CIs: 2.4, 2.9) compared with whites (geometric mean of 2.1 ng/ml, 95% CIs: 1.9, 2.3). Owing to the small sample size, the urinary BPA levels of black Canadians could not be assessed. The median urinary BPA (95% CIs) for whites in the United States was 2.1 ng/ml (1.8, 2.3), which was essentially the same as the median for the overall US population. For whites in Canada, the median urinary BPA (95% CIs) was 1.3 ng/ml (1.2-1.5), also the same as for the overall population.

Fasting. For NHANES, respondents aged 12 years and older were instructed to fast for 9 h (but not >16 h) before their morning appointment. Fasting was not required for respondents <12 years of age and for those with afternoon or evening appointments (http://www.cdc.gov/nchs/data/nhanes/nhanes_07_08/HouseholdInterviewer_07.pdf).
For CHMS, respondents were instructed to fast for 12 h before the morning appointment or for 2 h before an afternoon or evening appointment. 13 Information on hours since food/drink was consumed before the appointment was recorded for each respondent. In this study, overall, no association between fasting time and urinary BPA level was found for either survey. For NHANES, younger respondents (6- to 19-year olds) had shorter fasting times (median of 2 h for 6- to 11-year olds and 4 h for 12- to 19-year olds) compared with older participants (median of 5 h for 20- to 79-year olds) and had higher urinary BPA levels (Table 2).

Analytical Procedures

Major differences between analytical procedures in the US and Canada include the use of liquid versus gas chromatographic techniques and the use of enzymes during the hydrolysis stage with different glucuronidase and sulfatase activities (Table 7). The use of different analytical methods is less important than whether those methods produce comparable results 18; both INSPQ and CDC use an internal calibration standard (13C-labeled BPA) to compensate for some of the analytical differences. However, utilization of enzymes with different efficiencies cannot be completely compensated for by internal standardization and could generate bias in the overall results. Seven samples from the two proficiency programs were analyzed by both laboratories (Table 8). Individual laboratory data are not provided here in order to protect the confidentiality of the participating laboratories' identities in these proficiency testing programs. For CDC and INSPQ, only their mean urinary BPA concentration from the seven PTMs is compared with a consensus concentration; the consensus concentration is defined as the mean of the seven median values from all reporting laboratories (Table 8). The results are as follows (the CDC value was provided by the NCEH laboratory for this comparison): the INSPQ mean concentration is 14% lower than that for CDC. As compared with the consensus value, INSPQ and CDC results are 4% and 22% higher, respectively. Considering the differences in analytical methodology and the complicated nature of sample preparation, these differences are minor. The proficiency testing was conducted using samples with a urinary BPA concentration of approximately 5 µg/l. It is not known how or whether the percent difference observed between laboratories would change if the PTM concentrations were closer to the LOD.

Data Reporting

For urinary BPA, 151 respondents for NHANES and 507 respondents for CHMS had values <LOD (6.1% and 9.3%, respectively). The difference in the method of reporting values <LOD did not substantially affect urinary BPA concentrations (e.g., for NHANES, considering all participants, there was no difference in the geometric mean value for urinary BPA using LOD/2 or LOD/√2 (2.1 ng/ml, CIs: 1.9, 2.3)). Both the US and Canada set the LOD at 3 × SD and the limit of quantitation (LOQ) at 10 × SD of replicate analyses (5 or 10) of samples with concentrations near the LOQ. Measurements with concentrations between the LOD and the LOQ are reported as above the LOD in the final data sets in both surveys. Both laboratories performed reagent blank checks, but only INSPQ found results slightly above the LOD that had to be subtracted from reported data.
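The consensus comparison described above (consensus = mean of the per-sample medians across all reporting laboratories) can be sketched as follows; the laboratory results shown are hypothetical, since the actual PTM data are confidential.

```python
import statistics

def consensus_concentration(results_by_lab):
    """Consensus = mean of per-sample medians across all reporting laboratories."""
    n_samples = len(next(iter(results_by_lab.values())))
    medians = [statistics.median(lab[i] for lab in results_by_lab.values())
               for i in range(n_samples)]
    return statistics.fmean(medians)

def relative_bias_percent(lab_results, consensus):
    """Lab mean relative to consensus, as a percentage."""
    return (statistics.fmean(lab_results) - consensus) / consensus * 100.0

# Hypothetical PTM results (ug/l) for three labs over three samples
results = {"lab_A": [5.9, 6.1, 6.3],
           "lab_B": [4.9, 5.2, 5.0],
           "lab_C": [5.1, 5.0, 5.3]}
cons = consensus_concentration(results)
for name, vals in results.items():
    print(name, f"{relative_bias_percent(vals, cons):+.1f}%")
```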
An assessment of whether this adjustment could lead to a bias found a minimal impact on overall results and no negative concentrations after blank subtraction, indicating no over-correction that could produce lower results.

DISCUSSION

Biomonitoring data for several chemicals are now available for two neighboring countries, the US and Canada, providing researchers with the opportunity to compare chemical concentrations and develop hypotheses regarding exposures. Before these types of comparative assessments are conducted, researchers should be aware that the CHMS and NHANES are not identical; similarities and differences in the surveys should be assessed before performing comparisons. We used BPA as a case study to explore factors that might bias comparisons between surveys. Specifically, we focused on urine collection methods, population characteristics, analytical procedures, and data reporting methods. We discuss each of these here.

Urine Collection and Population Characteristics

Urine collection. One difference that could impact comparability of results is the use of field blanks by CHMS but not by NHANES. BPA in the body is rapidly metabolized by the liver to conjugated BPA and is excreted as essentially completely conjugated BPA. To analyze urinary BPA, the BPA conjugates are converted back to free, or parent, BPA via digestion with enzymes. Thus, laboratory measurements are of free BPA and cannot distinguish between environmental BPA, which is in the free form, and physiologic BPA, also in the free form after digestion of the urine. Without field blanks, it is not possible to determine whether sample contamination from environmental BPA sources or collection and storage devices impacts urine sample measurements. 19 In the CHMS, field blanks were used to quantify contamination; however, no comparable field blank data are available for the NHANES program. One possible source of contamination of urine samples is dust, in which BPA was detected, for example, in 95% of dust samples in the United States, albeit at very low levels (<0.5 to 10,200 ng/g; mean 843 ng/g; median 422 ng/g). 20 To date, there is no evidence that environmental levels are high enough to substantially impact overall levels in urine, although this could conceivably be a source of bias in studies of low levels of parent BPA in serum. 19

Population characteristics. Certain aspects of the sample populations in the CHMS and NHANES could impact comparison of biomonitoring results. First, the upper age cutoff differs for the two surveys. NHANES includes individuals ages 6 years and older, while CHMS has an upper age cutoff of 79 years. For urinary BPA measurements, this is unlikely to hinder inter-survey comparisons; limiting the NHANES population to ages 6 to 79 years resulted in the removal from the database of only 134 individuals and did not change the geometric mean urinary BPA concentration for the overall population. However, for other chemicals, attention should be paid to the effect of different age-based exclusion criteria. The effect of exclusion should be tested on a case-by-case basis and is likely especially important for chemicals with age-dependent biomonitoring results, such as lipophilic, bioaccumulative compounds. 21 Second, body weights for the Canadian population are lower than those for the United States, which for equivalent urinary BPA levels would produce higher intake estimates. Third, researchers need to recognize differences in race and ethnicity between the populations in the US and Canada.
The Canadian population has a far smaller proportion of blacks than whites compared with the US population. Given that urinary BPA levels in blacks were statistically significantly higher than for whites, this inter-country difference in population make-up raises potentially important questions for urinary BPA comparisons, as well as comparisons of other biomonitored chemicals, between countries. For example, how does the difference in racial make-up of the two populations affect distributions of concentrations in the overall populations? In addition, is the observed difference in urinary BPA concentrations between races due to factors related to diet and/or other lifestyle variables, or is it due to differences in the way whites and blacks metabolize certain chemicals? Racial variations in enzymes that catalyze the sulfate conjugation of drugs, other xenobiotics, neurotransmitters, and hormones have been observed. 22 Given the difference in racial/ethnic make-up of the CHMS and NHANES populations, exploration of this topic is warranted.

Fasting times. As a result of the short physiologic half-life of BPA and the assumption that the preponderance of BPA exposure is via diet, the duration of fasting before urine sample collection could have a substantial impact on measured urinary BPA levels. Systematic differences in fasting times between countries could therefore hinder the ability to directly compare urinary BPA data from the CHMS and NHANES. We examined this by comparing NHANES and CHMS fasting times to log urinary BPA concentrations and found essentially no correlation. A lack of correlation between fasting time and urinary BPA was also reported by Braun et al. 23 Several factors could contribute to this result. First, nondietary sources may contribute to BPA exposure. Rudel et al. 24 found that dietary changes to reduce exposure to food packaging resulted in a 66% decrease in urinary BPA levels, suggesting that diet contributes to some, but possibly not all, BPA exposure. Alternatively, it is possible that incorrect participant reporting of fasting times could contribute to the observed lack of correlation between fasting time and urinary BPA concentration. To address this, improved data on fasting adherence and reporting in future studies are needed. The impact of variations in, and adherence to, fasting time in NHANES and CHMS should be examined on a chemical-by-chemical basis when comparing biomonitoring data across countries. Finally, there are most certainly differences in participant exposures to BPA in terms of both timing and amount, which hinder our ability to evaluate the effect of fasting on urinary BPA concentrations (e.g., a large intake coupled with a long fasting time might yield a similar urinary BPA level as a small intake coupled with a short fasting time); the importance of this issue cannot be quantified without additional research.

Analytical Methods

Results from the proficiency testing revealed that CDC's results were approximately 14% higher, on average, than those for INSPQ. It is important to note that this difference is based on a small number (N = 7) of samples and that individual data for the samples are not available to further assess consistency. However, the proficiency testing process raised an interesting question related to the efficiency of the enzymes used to deconjugate the conjugated BPA in urine.
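As a rough illustration of the fasting-time analysis just described, here is a minimal Python sketch computing the Pearson correlation between self-reported fasting time and log urinary BPA; the respondent data are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical respondents: fasting time (h) and spot urinary BPA (ng/ml)
fasting_h = [2, 4, 5, 9, 12, 3, 6, 8]
bpa_ng_ml = [2.4, 1.9, 2.6, 2.1, 1.8, 3.0, 2.2, 2.0]
log_bpa = [math.log(v) for v in bpa_ng_ml]
print(f"r = {pearson_r(fasting_h, log_bpa):.2f}")  # near zero -> no association
```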
CDC and INSPQ utilize different enzymes for deconjugating BPA conjugates; the enzyme used by CDC may be more efficient at breaking the sulfonate conjugate. Preliminary investigations conducted by INSPQ using unspiked urine samples to compare enzyme efficiency revealed that CDC's enzymatic conditions produce concentrations that are 10% higher than INSPQ's concentrations (unpublished data), suggesting that enzyme efficiency is the major contributor to the analytical bias. In general, inter-laboratory proficiency testing is an important step that should be conducted for any chemical for which inter-survey comparisons are attempted. However, for BPA, it is noted that although there is a slight positive bias, on average, between results produced by the CDC compared with INSPQ, and recognizing the limitations of the available inter-laboratory proficiency testing data, analytical method bias does not appear to fully explain the difference in urinary BPA concentrations between the American and Canadian populations.

Data Reporting

Two data reporting issues were identified that should be considered when comparing CHMS and NHANES biomonitoring data. The first relates to differences in limits of detection and the method for reporting measures below the LOD. For urinary BPA, although the LODs and the method for reporting measures below the LOD differ, there were too few non-detects for this to substantially impact overall urinary BPA levels. For chemicals with high frequencies of non-detects (e.g., dioxins), consideration of this difference in data reporting will be more important. 25 The second issue concerns temporal variability in chemical concentration and frequency of reporting. There are four timeframes in the United States for which there are adult population data from NHANES on urinary BPA (Figure 1). Examination of the temporal trend for urinary BPA in the United States reveals variability that is not likely due to overall changes in exposure, as there is no evidence of major changes in use of BPA over this time period, 4 nor is the variability because of changes in analytical methodology (Calafat AM, personal communication). Research on temporal variability in individual urinary BPA levels 7 suggests the possibility that the short physiologic half-life of BPA, coupled with variations in day-to-day individual exposure, could be responsible for some of the variability observed in Figure 1. Additional years' worth of data may be required to observe actual trends in urinary BPA levels. 26 For chemicals with long half-lives, less extensive temporal data reporting may be required for inter-survey comparisons. Two relationships have remained relatively consistent in the US and Canada: the overall relationships between gender and urinary BPA and between age and urinary BPA. In all data sets except one, males had higher urinary BPA levels than females (in NHANES 2007-2008, median levels were the same for males and females) (Figure 2).

[Table footnotes, displaced in extraction: No fasting required for those under the age of 12 years or for those with afternoon or evening appointments; nine hours for those 12 years and over with morning appointments. (a) Sample size for estimates of intake is smaller than the total number of samples analyzed for BPA because data needed to estimate intake (e.g., body weight) were not available for all participants.]
In general, younger people have higher urinary BPA levels than older people, although the relationship between the 6- to 11-year age group and the 12- to 19-year age group has fluctuated depending on the timeframe examined (Figure 2). 3,4

CONCLUSIONS

We identified several dissimilar methodologic aspects of the NHANES and CHMS. The differences assessed in this study appear to have minimal impact on the interpretation of comparative urinary spot BPA measures and BPA intakes from the two surveys. An earlier review of methodologic differences in measurements of dioxins in breast milk highlighted the importance of evaluating study design before comparing data sets from different research groups. 27 We recommend that before developing hypotheses regarding comparisons of biomonitoring data between surveys from different countries, a thorough review of methodologic aspects that might impact biomonitoring results be conducted. We further recognize the ongoing controversy regarding the interpretation of studies of toxicity of BPA, but note that health-based guidelines are available with which to compare population-based BPA intakes estimated from the CHMS and NHANES data. A TDI of 50 µg/kg-day (50,000 ng/kg-day) has been established by the European Food Safety Authority 28; the same value is used by the US Environmental Protection Agency as an RfD. 29 A provisional TDI of 25 µg/kg bw-day (25,000 ng/kg-day) was established by Health Canada. 30 Based on the intakes estimated for US and Canadian populations (Tables 4 and 5), regardless of age or gender, all intakes are well below the health-based guidance values set by the US, Canada, and the European Food Safety Authority. For example, the 90th percentile intakes for 12- to 19-year olds, which had the highest intakes for any age or gender breakdown in both countries, were more than two orders of magnitude below the TDI for Canada and the RfD for the US.

[Figure 1 caption, partial: ... 26 (no confidence intervals were given; ages 18 and older). For the remaining years, data are from the online Centers for Disease Control and Prevention data tables for urinary BPA in adults 20 years and older (LOD/√2) (http://www.cdc.gov/exposurereport/data_tables/URXBPH_DataTables.html) and from Canadian Health Measures Survey data for adults 20 years and older (truncated at age 79 years and using LOD/2).]

CONFLICT OF INTEREST

Dr. LaKind was supported by the Polycarbonate/BPA Global Group of the American Chemistry Council. Dr. LaKind consults for both government and industry.
2014-10-01T00:00:00.000Z
2012-02-15T00:00:00.000
{ "year": 2012, "sha1": "e5865f9b18e337b7f5eecd540a1820aaa358a270", "oa_license": "CCBYNCND", "oa_url": "https://www.nature.com/articles/jes20121.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "e5865f9b18e337b7f5eecd540a1820aaa358a270", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
259897314
pes2o/s2orc
v3-fos-license
Predicting Critical Care Nurses' Intention to Use Physical Restraints in Intubated Patients: A Structural Equation Model

Introduction

Critically ill patients often rely on a series of life support equipment and invasive treatments (e.g., endotracheal intubation and central venous catheterization) throughout the intensive care unit (ICU) stay, which may result in agitation, pain, and delirium [1]. Unfortunately, these disturbing symptoms could lead to adverse events including self-extubation and medical device removal [2,3], seriously compromising patient safety. Physical restraint (PR) is commonly perceived as a routine solution to avoid self-extubation empirically [2,4]. PR is defined as "any action or procedure that prevents a person's free body movement to a position of choice and/or normal access to his/her body by the use of any method, attached or adjacent to a person's body that he/she cannot control or remove easily" [5]. According to a prospective observational study, mechanical ventilation was an independent risk factor for PR use [6]. Compared to nonintubated patients, PR use is more pervasive (35.8% in Jordan [7] and 75% in Japan [8]) among mechanically ventilated patients in critical care settings [2,6,9]. However, there is growing evidence identifying the association between PR and deleterious effects, both physical and psychological [10-12]. Besides, PR itself is regarded as a violation of autonomy and dignity [13,14]. Furthermore, whether PR can effectively ensure patients' safety remains controversial, and previous studies [12,15,16] have indicated that PR may, conversely, exacerbate unplanned extubation and medical device removal. Due to the abovementioned potential hazards, how to facilitate PR minimization programs has become a global issue, and PR reduction has been advocated by many international institutions including the Registered Nurses' Association of Ontario (RNAO), the American Nurses Association (ANA), and the British Association of Critical Care Nurses (BACCN). As critical care nurses are the main decision-makers in PR practice, a profound understanding of nurses' intention to use PR is an essential prerequisite for the development of PR reduction. Previous studies have revealed the process and nurses' experience of PR decision-making. Shen et al. [17] proposed a four-stage process of PR including perceptions of risks, hesitation, implementation, and reflection. The safety of patients and staff is seen as the core element in the process. In addition, it is conventionally believed that the application of PR is an infallible guarantee of security. Patient safety is prioritized in the clinical context, but at the expense of ignoring human rights including autonomy and dignity [18]. Despite the existence of numerous studies concerning PR practices, most of them focus on describing the experience of nurses and the knowledge, attitude, and practice of PR, lacking a theoretical framework to analyze the intention of PR. The theory of planned behavior (TPB), developed by Ajzen [19], is a widely recognized psychological framework applied in various fields, including healthcare. In the context of critical care, the TPB has been previously used in several studies to understand and predict human behaviors related to healthcare practices. For instance, O'Boyle et al. [20] used a TPB-based theoretical model to explain the self-reported and observed handwashing behavior of critical care and postcritical care nurses.
Another study conducted by Tanguay et al. [21] aimed to examine the factors that influence nurses' intentions to practice oral care with intubated clients in intensive care settings using the TPB. These research contexts indicate that the TPB can assist in enhancing our understanding of decision-making processes in the field of intensive care and offer guidance on how to improve practical behaviors. Besides, the rationale for applying Ajzen's TPB specifically in critical care settings lies in its ability to capture the complexity of nurses' intention formation processes along three dimensions: attitudes, subjective norms, and perceived behavioral control. Critical care settings are characterized by high stress levels, time constraints, and life-threatening situations where quick decision-making is required. In such circumstances, understanding the determinants of nurses' intentions towards PR becomes crucial as it directly impacts patient safety and quality of care. By taking into account these various dimensions of intention formation, the TPB model provides a new perspective for understanding the decision-making process of PR and patient care management through the lens of critical care nurses. It states that an individual's behavior is determined by his or her intention, and the behavioral intention is determined by three main dimensions: (1) attitude (the extent to which an individual has a favorable or unfavorable evaluation of the behavior), (2) subjective norm (SN; perceived social pressure to perform or not to perform the behavior), and (3) perceived behavioral control (PBC; perceived ease or difficulty when performing the behavior) [19]. These factors are highly relevant in critical care nursing, where the decision to use PR is influenced by a complex interplay of individual beliefs, professional guidelines, and organizational culture. Considering these factors together makes it possible to predict the likelihood of an individual performing a specific behavior. Via-Clavero et al. [22] developed the Physical Restraint Theory of Planned Behavior (PR-TPB) based on the TPB. Besides, from critical care nurses' perspectives, ethical conflict is regarded as an undeniable difficulty when practicing PR, because the conflict between maintaining patients' safety and violating patients' autonomy and dignity often places nurses in awkward predicaments [23,24]. Though critical care nurses have realized the adverse effects of PR, they have no choice but to use it to ensure patient safety. Thus, we aimed to investigate the effects of the TPB constructs (attitude, SN, and PBC) and ethical conflict on physical restraint intention in this study. The research question of this study is as follows: to what extent can the TPB constructs and ethical conflict predict ICU nurses' intention to use physical restraint in intubated patients? What is new in our study is that ethical conflict was introduced as a predictor of PBC, because ethical conflict is regarded as a difficulty in the PR decision process. Previous studies have identified ethical dilemmas as an essential factor influencing PR practice, and nurses have reflected on experiences of ethical dilemmas due to violations of nonmaleficence and beneficence [14]. Thus, the proposed framework is illustrated in Figure 1.

Design. A cross-sectional survey was conducted among critical care nurses in China from February to March 2022.
In this study, structural equation modeling (SEM) was applied to establish models to predict critical care nurses' intention to use PR in intubated patients. Integrating the conceptual framework of the Theory of Planned Behavior and ethical conflict, the hypothetical model is shown in Figure 1.

Sample and Setting. The typical method for determining SEM sample size is based on the general rule of 10 observations per indicator [25-27]. In this study, it was calculated by the following equation: (4 + 2 + 3 + 3 + 19) * 10 = 310, while according to Hair [28], the minimum acceptable sample size should be 300 for a model with seven or fewer constructs and factor loadings larger than or equal to 0.45. To obtain more statistically robust results, the target sample size was set at 310 after considering both calculation approaches; a sketch of this calculation is given after this section. A total of 313 critical care nurses were ultimately included in this study. Participants were recruited from critical care units covering Yunnan, Chongqing, Zhejiang, and Guangdong provinces in China via convenience sampling. The inclusion criteria were (1) registered nurses who worked in intensive care units and (2) voluntary participation and informed consent to this survey. Intern nurses were excluded.

Data Collection. Data were collected using a self-report questionnaire composed of three parts: (1) the Physical Restraint Theory of Planned Behavior (PR-TPB), (2) the Ethical Conflict in Nursing Questionnaire-Critical Care Version (ECNQ-CCV), and (3) a demographic information form. Data collection included two stages. Stage 1: three critical care nurses who were not involved in this study were invited to fill out the pretested questionnaires to eliminate any ambiguous or incomprehensible expressions and to estimate the required time. Based on the pretest feedback, some inappropriate expressions were revised and significant words were marked. Stage 2: the researchers contacted and explained the aim of the study to the head nurses of each department and sent the revised questionnaires to the participants via an online web-based platform (https://www.wjx.cn/vm/h7e0uOF.aspx). With the assistance of the head nurses, the eligible participants were identified and organized to fill out the questionnaires. At the same time, participants were provided with a phone number to ask any questions about the study. All of the participants were instructed to complete the questionnaire voluntarily and anonymously. To ensure the quality of returned questionnaires, the questionnaires were set up as follows: (1) the questionnaire began with a concise introduction to the purpose of the study, and notes for items needing more attention were in bold and marked in red; (2) to avoid missing responses, all items were set as required questions in the submission process, and the platform sent alerts automatically if any questions were missed; (3) to prevent repeated participation, each IP address was limited to filling out the questionnaire once only. A total of 441 questionnaires were distributed, and 316 were returned (response rate: 71.7%). In addition, 3 questionnaires with a response time of less than 3 minutes were excluded, and 313 valid questionnaires were ultimately selected (n = 313).

The Physical Restraint Theory of Planned Behavior (PR-TPB). The PR-TPB [22] consists of the following 4 subscales: (1) attitude, (2) subjective norm, (3) perceived behavioral control, and (4) intention. All the answering formats were 7-point Likert scales ranging from 1 to 7.
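As a minimal sketch of the sample-size rule of thumb described above (values taken from the text):

```python
# Rule-of-thumb SEM sample size: 10 observations per indicator,
# with indicator counts per construct as reported in the text.
indicators = {
    "attitude": 4,
    "subjective_norm": 2,
    "perceived_behavioral_control": 3,
    "intention": 3,
    "ecnq_ccv_scenarios": 19,
}
rule_of_thumb_n = sum(indicators.values()) * 10   # (4+2+3+3+19) * 10 = 310
hair_minimum = 300  # Hair's guideline: <=7 constructs, loadings >= 0.45
target_n = max(rule_of_thumb_n, hair_minimum)
print(target_n)  # 310, matching the target sample size reported in the text
```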
In this study, attitude, subjective norm, and perceived behavioral control were measured by their corresponding subscales. The attitude was measured using 4 items with opposite adjectives (unsafe/safe, unnecessary/necessary, harmful/beneficial, and unacceptable/acceptable) placed on the poles of a 7-point Likert scale. The total score ranges from 4 to 28. Subjective norm was measured by 2 items describing the social pressure the individual perceived from the working team when performing physical restraint. Each item is rated from 1 (strongly disagree) to 7 (strongly agree). Perceived behavioral control was measured with 3 items reflecting self-efficacy and controllability toward applying physical restraint in intubated patients. Participants rated each item from 1 (strongly disagree) to 7 (strongly agree). Intention was evaluated by 3 scenarios in ICU settings, rated from 1 (in no case) to 7 (in all cases).

Ethical Conflict in Nursing Questionnaire-Critical Care Version (ECNQ-CCV). The ECNQ-CCV [29] includes 19 scenarios that may produce ethical conflict among critical care nurses, and each scenario contains three questions to measure ethical conflict: "frequency," "degree of intensity," and "type." Frequency is measured with a 6-point Likert scale ranging from 0 (never) to 5 (at least once a week). The degree of intensity is measured with a 5-point Likert scale ranging from 1 (no problem at all) to 5 (highly problematic), and the type of ethical conflict is measured by six categories. In the current study, ethical conflict was measured by the index of exposure to ethical conflict (IEEC). The IEEC was developed to reflect levels of exposure to ethical conflict; it multiplies the frequency and the degree of intensity of each scenario, giving a per-scenario range of 0 to 25. The total IEEC score ranges from 0 to 475, with a higher score indicating higher levels of ethical conflict (see the sketch after this section). The instrument was tested for validity and reliability among 205 critical care nurses in Spain, with a reported Cronbach's α of 0.882. The Chinese version of the ECNQ-CCV has been validated and found to have good reliability (Cronbach's α = 0.902 and McDonald's ω = 0.903) and validity [30]. Cronbach's α was 0.923 in this study.

Demographic Information Form. The demographic information form included 7 questions about gender, age, work years, job title, education, training in physical restraint, and training in ethics.

Ethical Considerations. Ethical approval was obtained from the Second Affiliated Hospital Zhejiang University School of Medicine (SAHZU, no. 2020131). All participation in this survey was voluntary and anonymous, and a completed questionnaire was recognized as informed consent. Participants were informed about the authorship and purpose of the research and were told that all data would remain anonymous and confidential.

2.6. Data Analysis. IBM SPSS Statistics version 25 and IBM SPSS AMOS version 24 were used in the analysis of the study. Categorical variables were described by frequency and percentage, and continuous variables were described using means and standard deviations. The structural equation model was composed of two major elements: the measurement model and the structural model. Step one: confirmatory factor analysis (CFA) was performed to assess the reliability of the measurement model, and the correlations of constructs were calculated using Pearson's correlation coefficient.
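As a rough illustration of the IEEC scoring rule described above (frequency 0-5 multiplied by intensity 1-5, summed over the 19 scenarios), here is a minimal Python sketch; the respondent values are hypothetical.

```python
# Sketch of the IEEC scoring rule: per scenario, frequency (0-5) multiplied by
# degree of intensity (1-5), summed over the 19 ECNQ-CCV scenarios (range 0-475).
def ieec_score(frequencies, intensities):
    if not (len(frequencies) == len(intensities) == 19):
        raise ValueError("ECNQ-CCV has 19 scenarios")
    return sum(f * i for f, i in zip(frequencies, intensities))

# Hypothetical respondent reporting moderate exposure on every scenario
print(ieec_score([2] * 19, [3] * 19))  # 114 out of a possible 475
```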
Step two: a structural model was constructed, and model fitness was assessed against the following criteria: chi-square/degrees of freedom (χ2/df) <3, comparative fit index (CFI) >0.9, goodness-of-fit index (GFI) >0.9, normed fit index (NFI) >0.9, and root mean squared error of approximation (RMSEA) <0.08 [31,32]. Maximum likelihood (ML) estimation was used as the parameter estimation method to find the best-fitting model because the skewness and kurtosis of the involved variables were within the acceptable range (absolute value of skewness <3 and absolute value of kurtosis <10), satisfying the assumption of normality [33]. Direct effects and indirect effects of the constructs were calculated by bootstrap estimates. A two-sided p value of 0.05 was set for statistical significance.

Sample Characteristics. 313 critical care nurses participated in the survey, and the characteristics of participants are listed in Table 1. The mean age and work years of the participants were 30.44 (SD = 6.21) and 7.42 (SD = 6.00) years, respectively; 13.7% were male and 86.3% were female. With respect to job title, 74.5% held a junior title, 23.6% an intermediate title, and 1.9% a senior title. 82.7% of participants had a bachelor's degree. Over 70% of the participants had received training in physical restraint and nursing ethics.

Structural Equation Modeling

3.2.1. Measurement Model. In the process of confirmatory factor analysis, reliability was assessed for the measurement model. Table 2 provides an overview of the factor loadings and composite reliability of the constructs. The reliability of the measurement model was evaluated by factor loadings, composite reliability (CR), and Cronbach's alpha. The factor loadings of items varied from 0.46 to 0.84, meeting the threshold of 0.45 [34]. The values of composite reliability were above 0.5 (the criterion for CR [35]), suggesting stable composite reliability. Cronbach's alpha for all items was 0.74, which was higher than the 0.7 threshold. The preceding data confirmed the measurement model's acceptable reliability. Table 3 shows the correlations, the mean score, and the standard deviation of each construct. Attitude (r = 0.26, p < 0.01), SN (r = 0.27, p < 0.01), and PBC (r = 0.29, p < 0.01) were positively associated with intention. IEEC was positively associated with SN (r = 0.12, p < 0.05) and PBC (r = 0.13, p < 0.05). The goodness-of-fit indices of the measurement model were χ2/df = 1.34 (<3), RMSEA = 0.03 (<0.08), GFI = 0.97 (>0.90), CFI = 0.98 (>0.90), and AGFI = 0.94 (>0.90). All the goodness-of-fit indices indicated a satisfactory model. The structural model is shown in Figure 2. The model was assessed by the following goodness-of-fit indices: χ2/df = 2.57, RMSEA = 0.07, GFI = 0.94, and AGFI = 0.90; these indices indicated a satisfactory model fit. The standardized direct and indirect path coefficients of the model are presented in Table 4. The results revealed that attitude (β = 0.29, p < 0.05), subjective norm (β = 0.25, p < 0.05), and perceived behavioral control (β = 0.32, p < 0.001) had a direct effect on the intention to apply PR in intubated patients. The index of exposure to ethical conflict had a direct effect on perceived behavioral control (β = 0.13, p < 0.05). At the same time, IEEC had an indirect effect on intention (β = 0.04, p < 0.05) via perceived behavioral control. All the paths were significant at the 0.05 level. All the variables together accounted for 29% of the variance in intention to use PR in intubated patients (R2 = 0.29).
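The fit-index screening used above can be expressed as a small check against the cited cut-offs; a minimal Python sketch, using the measurement-model values reported in the text:

```python
# Check reported fit indices against the cut-offs cited in the text
# (chi2/df < 3, CFI > 0.9, GFI > 0.9, NFI > 0.9, AGFI > 0.9, RMSEA < 0.08).
CUTOFFS = {
    "chi2_df": ("max", 3.0),
    "CFI": ("min", 0.90),
    "GFI": ("min", 0.90),
    "NFI": ("min", 0.90),
    "AGFI": ("min", 0.90),
    "RMSEA": ("max", 0.08),
}

def check_fit(indices):
    """Return a pass/fail flag for each reported index against its cut-off."""
    result = {}
    for name, value in indices.items():
        kind, cut = CUTOFFS[name]
        result[name] = value < cut if kind == "max" else value > cut
    return result

# Measurement-model values reported in the text
print(check_fit({"chi2_df": 1.34, "RMSEA": 0.03, "GFI": 0.97,
                 "CFI": 0.98, "AGFI": 0.94}))
# all True -> satisfactory fit
```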
Discussion

The structural equation model revealed that ethical conflict, attitude, subjective norm, and perceived behavioral control were significant predictors of PR intention in intubated patients. Besides, this research provides a theoretical basis and new perspectives for follow-up research in the field of developing PR guidelines [36,37]. Ethical conflict seems to be a common issue in the process of PR decisions due to the complexity of scenarios in critical care settings. As a previous study reported, nearly one-third of nurses have been confronted with ethical dilemmas in the process of physical restraint [14]. According to the results of this study, there was a positive association between ethical conflict and the intention to use PR in intubated patients, revealing that when exposed to a higher level of ethical conflict, ICU nurses are more likely to apply PR in intubated patients. This phenomenon might be explained by the following reasons: (1) uncertainty and depressing feelings often come with ethical conflict, and PR may be a means to ease ethical conflict and cope with frustrating feelings; some nurses have noted that the application of PR provides an inner sense of security and relieves the pressure of maintaining patients' safety [23]. (2) Despite the violation of human rights and the restriction of bodies, the security of patients is always regarded as the priority; nurses may rationalize the implementation of PR by telling themselves that it is inevitable for safety reasons. (3) Furthermore, long-term exposure to a high level of ethical conflict may lead to a more indifferent attitude towards patients' human rights, and thus to turning to PR thoughtlessly. In this study, we also found that nearly one-third of the nurses had not received ethics education. For clinicians and policy-makers, this finding emphasizes the need to include ethics in education and continuing education programs for critical care nurses. At the same time, we noticed that some nurses may be confronted with depressing moral conflict; hospital managers need to take staff's ethical conflict into consideration and provide clinical nurses with an outlet to release their inner emotional burden. In the future, effective ways to identify and relieve ethical conflict in PR decisions need to be explored. In the current study, attitude had a positive effect on the intention to use PR in intubated patients, which means the more favorable the attitude, the stronger the intention to perform the behavior. The mean attitude score was 25.32 of 28, approximately 90% of the total score, indicating a favorable attitude towards the application of PR in intubated patients among critical care nurses. Consistent with a previous study [2], such a positive attitude towards PR in intubated patients may be associated with the empirical belief that PR can prevent self-extubation and medical device removal regardless of its deleterious effects. Thus, from this perspective, more systematic and comprehensive education and training on PR are essential for reconstructing nurses' perceptions and attitudes concerning physical restriction. Several cross-sectional studies in Turkey [38], Jordan [39], and China [40] have also pointed out inadequate education and knowledge of PR. Unfortunately, despite being aware of the harmful effects of PR, some nurses are still likely to apply PR in intubated patients out of the responsibility to protect patients' safety or for lack of other effective alternative methods.
Thus, the concept of minimizing PR needs to be advocated among all medical staff in critical care settings. Subjective norm had a positive effect on the intention to apply PR in intubated patients, and the mean score was more than twice that reported in Spain [41], indicating a high level of perceived social pressure to perform PR in China compared to Spain. A qualitative study [24] has shown that PR was regarded as a routine practice within workplace norms, and physical restraint of intubated patients was ingrained in the security culture of critical care settings. Nurses, as safeguards of critically ill patients, face the burden of responsibility and pressure from the workplace. The practice of PR may be driven by workplace expectations rather than by clinical guidelines and nurses' critical thinking. Consequently, how to re-establish evidence-based PR guidelines in Chinese hospitals and stimulate self-reflective thinking patterns are crucial issues to be examined in the development of PR reduction. Regarding perceived behavioral control, it was the strongest positive predictor of intention in this study. This finding implies that nurses with higher self-efficacy and controllability toward PR are more inclined to apply PR in intubated patients. In general, senior nurses should be more proficient in PR and have higher PBC scores. However, in this study we found, interestingly, that nurses who had worked for 0-4 years had higher PBC scores than those who had worked for more than 12 years. Similarly, Perez et al. [24] found that novice nurses are more likely to use PR in intubated patients compared to senior nurses, which suggests an evident gap between self-evaluation and true PBC. Lacking a comprehensive understanding and formal education on PR, novice nurses may overestimate their controllability of PR, resulting in excessive PR intention in intubated patients. In addition, due to the burden of ensuring patients' safety and a lack of alternative methods, novice nurses are compelled to use PR. This suggests that PR education should focus on novice nurses, and critical care units could assign senior nurses to guide novice nurses in clinical PR practice. This study has some limitations. First, the study was conducted among critical care nurses in four provinces of China, so the generalizability of our conclusions to other populations should be considered with caution. Another limitation lies in the fact that the TPB permits valid predictions only when the behavior is entirely governed by volitional control, but some external factors may constrain nurses' ability to exercise full control over their PR use (e.g., patient acuity and availability of alternative interventions). Furthermore, the TPB may not fully capture the dynamic and context-dependent nature of critical care nursing practice because it focuses on rational decision-making and assumes stable preferences. In addition, a cross-sectional study cannot reflect the changing process of PR intention, and a longitudinal design is needed in the future.

Conclusion

To our knowledge, this is the first study to predict critical care nurses' PR intention in intubated patients using a structural equation model. This study revealed that ethical conflict, attitude, subjective norm, and perceived behavioral control are positive predictors of PR intention in intubated patients.
Implications for Nursing Management

The present findings provide a novel theoretical standpoint for examining PR intentions within critical care environments. To mitigate PR utilization in critical care nursing, it is vital to implement a holistic approach that encompasses not only ongoing education and training on physical restraint and ethical considerations but also organizational management aspects that could affect nurses' attitudes, intentions, and actions. This may involve evaluating existing work resources, infrastructural conditions, occupation-related framework conditions, and disseminating the concept of PR reduction in clinical contexts. Moreover, it is essential to account for the accessibility of technologically sophisticated equipment. In addition, recognizing the potential impact of nursing management in promoting alternatives to PR, such as nonpharmacological approaches and patient-centered care strategies, is of utmost importance. By contemplating these wider contextual elements and fostering a more intricate comprehension of the factors influencing PR application in critical care nursing, valuable insights can be gained for the development of efficacious interventions aimed at enhancing patient safety and care quality.

Data Availability

The datasets used and/or analyzed during the current study are available from the corresponding authors on reasonable request.

Ethical Approval

This study was approved by the Clinical Research Ethics Committee of The Second Affiliated Hospital, Zhejiang University School of Medicine (SAHZU, no. 2020131). The study was conducted in accordance with the Helsinki Declaration for the protection of human subjects.

Disclosure

Yajun Ma and Nianqi Cui are the co-first authors.

Conflicts of Interest

The authors declare that they have no conflicts of interest.
Naturally Acquired Antibody Response to Malaria Transmission Blocking Vaccine Candidate Pvs230 Domain 1

Plasmodium vivax malaria incidence has increased in Latin America and Asia and is responsible for nearly 74.1% of malaria cases in Latin America. Immune responses to P. vivax are less well characterized than those to P. falciparum, partly because P. vivax is more difficult to cultivate in the laboratory. While antibodies are known to play an important role in P. vivax disease control, few studies have evaluated responses to P. vivax sexual stage antigens. We collected sera or plasma samples from P. vivax-infected subjects from Brazil (n = 70) and Cambodia (n = 79) to assess antibody responses to domain 1 of the gametocyte/gamete stage protein Pvs230 (Pvs230D1M). We found that 27.1% (19/70) and 26.6% (21/79) of subjects from Brazil and Cambodia, respectively, presented with detectable antibody responses to Pvs230D1M antigen. The most frequent subclasses elicited in response to Pvs230D1M were IgG1 and IgG3. Although age did not correlate significantly with Pvs230D1M antibody levels overall, we observed significant differences between age strata. Hemoglobin concentration inversely correlated with Pvs230D1M antibody levels in Brazil, but not in Cambodia. Additionally, we analyzed the antibody response against Pfs230D1M, the P. falciparum ortholog of Pvs230D1M. We detected antibodies to Pfs230D1M in 7.2 and 16.5% of Brazilian and Cambodian P. vivax-infected subjects. Depletion of Pvs230D1M IgG did not impair the response to Pfs230D1M, suggesting pre-exposure to P. falciparum or co-infection. We also analyzed IgG responses to sporozoite protein PvCSP (11.4 and 41.8% in Brazil and Cambodia, respectively) and to merozoite protein PvDBP-RII (67.1 and 48.1% in Brazil and Cambodia, respectively), whose titers also inversely correlated with hemoglobin concentration only in Brazil. These data establish patterns of seroreactivity to sexual stage Pvs230D1M and show similar antibody responses among P. vivax-infected subjects from regions of differing transmission intensity in Brazil and Cambodia.

INTRODUCTION

Malaria is a vector-borne infectious disease caused by the Plasmodium protozoan parasite. Over 200 million people suffer malaria episodes every year, primarily in tropical low-income settings, and pregnant women and children are particularly vulnerable to severe disease (1). Malaria eradication is a global priority, and an efficacious vaccine could strengthen current control efforts and enable elimination strategies. Vaccine development depends on the understanding of protective immunity, and it is fundamental to characterize immune responses to infection in a natural setting. While much research has focused on P. falciparum, the species causing most morbidity and mortality, immune responses to P. vivax infection are less well studied. In 2017, Brazil reported an increase in malaria incidence rate that contributed to 25% of malaria cases in all of Latin America, the majority of which (74.1%) were caused by P. vivax infection (1). The Americas are not the only region affected by vivax malaria: Cambodia, in Asia, is particularly affected, reporting a 98% increase in clinical cases between 2016 and 2017 (1). Neither Cambodia nor Brazil is expected to meet the goal of 40% malaria reduction by 2020; thus, both countries require additional strategies to control and prevent malaria infection and transmission. Importantly, vivax malaria is a global issue (2) and an increase in the number of P.
vivax cases has been recently reported in Africa (3)(4)(5)(6). Prevention tools that target the sexual stages of parasites may be critical to reduce disease incidence in locations where transmission rates are increasing. Transmission to the next vulnerable human can be halted by disrupting the development of the sexual stage parasite in the mosquito, the basis for the development of transmission-blocking vaccines (TBV) (7). Naturally acquired immunity to P. falciparum TBV candidates is well characterized (8)(9)(10) and TBVs for P. falciparum are currently in pre-clinical and clinical trials (11)(12)(13)(14). However, P. vivax TBV candidates are less advanced. To date, only Pvs25, a post-fertilization antigen present on the surface of Plasmodium zygotes and ookinetes, has been evaluated as a human vaccine targeting P. vivax sexual stages (15,16). Although Pvs25 immunization has shown promising results in mice, achieving durable anti-Pvs25 antibody responses remains challenging and no boosting effect of natural exposure is expected; thus, multiple vaccinations may be required. We hypothesize that the development of a vaccine able to target a prefertilization antigen may benefit from boosting during natural infections and thereby reduce transmission more effectively. Pvs230 (the ortholog of the P. falciparum Pfs230) is a prefertilization gametocyte/gamete antigen in P. vivax parasites with a low level of polymorphism worldwide (17), making it a promising target for TBV strategies in Asia and Latin America. Studies have explored Pvs230 TBV candidacy by assessing mouse antisera raised against four domains of the Pvs230 protein (18), but prevalence of anti-Pvs230 antibodies during naturally acquired infection in humans has never been assessed. Here, we evaluated seroprevalence to the first domain of the sexual stage antigen Pvs230 (Pvs230D1M) in P. vivax-infected subjects in malaria-endemic areas of Brazil and Cambodia. Our results can inform future strategies to develop Pvs230D1M as a transmission-blocking vaccine.

METHODS

Study Subjects

Samples were collected from subjects in malaria-endemic areas of Brazil and Cambodia (Figure 1) when they presented with acute P. vivax infection. Presence of P. vivax parasites was diagnosed by microscopy and absence of P. falciparum parasites was also established; gametocytes were not separately documented by microscopy and are hence not available for analyses. Sera (Brazil) or plasma (Cambodia) were frozen and transported to NIH in Rockville, USA, for further analysis. Additional information on patients from this study is presented in Table S1.

Pvs230D1M, Pfs230D1M, PvDBP-RII, and PvCSP ELISA

Antibody responses against Pfs230D1M, Pvs230D1M, PvDBP-RII (P. vivax Duffy Binding Protein Region II) and PvCSP (P. vivax Circumsporozoite Protein) recombinant antigens were determined by enzyme-linked immunosorbent assay (ELISA). Pfs230D1M was expressed in Pichia pastoris as previously described (19). Details for the production and purity of Pvs230D1M (Sal-1, NCBI reference sequence XP_001613020.1) and PvCSP (CSP31VK210, NCBI reference KT588189.1), which were also produced in P. pastoris, will be reported elsewhere [manuscript in preparation]. PvDBP-RII was expressed in E. coli BL-21 cells and refolded as previously described (20)(21)(22)(23). Immulon® 4HBX plates were coated with 1 µg/mL of recombinant antigens, then incubated overnight at 4 °C. Coated plates were blocked with 320 µL of buffer containing 5% skim milk in Tris-buffered saline (TBS) for 2 h at room temperature (RT), and washed four times with 1X Tween-TBS.
After establishing minimum serum dilutions to detect reactivity against individual antigens in pilot studies, plasma or serum samples (diluted 1:10 for Pvs230D1M, 1:100 for Pfs230D1M, 1:50 for PvDBP-RII, and 1:250 for PvCSP in blocking buffer) were added to antigen-coated wells in duplicate and incubated for 2 h at RT. Plates were washed and incubated with 100 µL anti-human IgG (1:2000 dilution; SeraCare: KPL) for 2 h at RT. The plates were washed and subsequently incubated in the dark for 30 min at RT with a colorimetric substrate (p-nitrophenyl phosphate; Sigma). Absorbances (405 and 650 nm) were measured using a SoftMax Pro7 ELISA reader (Molecular Devices). The cut-off to define positivity was based on the average optical density (OD) from 36 non-immune serum samples from USA donors (negative controls), whose values did not differ significantly between experiments (p > 0.9); hence, OD values for controls from different assays were combined (Figures 2, 3, 7). The cut-off for positivity was calculated as the mean OD of negative controls plus 3 standard deviations.

Detection of Pvs230D1M IgG Subclasses by ELISA

Immulon® 4HBX plates were coated with 5 µg/mL of recombinant Pvs230D1M antigens, then incubated overnight at 4 °C. Coated plates were blocked with 320 µL of buffer containing 5% skim milk in Tris-buffered saline (TBS) for 2 h at RT, and washed four times with 1X Tween-TBS. Plasma or serum samples were diluted 1:10 in blocking buffer, added to antigen-coated wells in duplicate, and incubated for 2 h at RT. IgG subclasses were detected using the following antibodies: mouse anti-human IgG1 Fc-AP, mouse anti-human IgG2 Fc-AP, mouse anti-human IgG3 Hinge-AP, and mouse anti-human IgG4 Fc-AP from Southern Biotech, for 2 h at RT. All these antibodies were diluted 1:750 in blocking buffer. The plates were washed and subsequently incubated in the dark for 30 min at RT with a colorimetric substrate (p-nitrophenyl phosphate; Sigma). Absorbances (405 nm and 650 nm) were measured using a SoftMax Pro7 ELISA reader (Molecular Devices). The cut-off to define positivity was based on the average optical density (OD) from 36 non-immune serum samples from USA donors (negative controls) and was calculated as the mean OD of negative controls plus 3 standard deviations. A sample was considered positive if its background-adjusted OD was above the cut-off value.

Antibody Depletion and ELISA

Immulon® 4HBX plates were coated with 1 µg/mL of recombinant Pfs230D1M, Pvs230D1M, or PvDBP-RII, respectively, incubated overnight at 4 °C, blocked for 2 h at RT, and washed. Thereafter, 100 µL of sample (diluted 1:50) were added to Pvs230D1M-coated wells and incubated for 1 h at RT. The unbound material from the Pvs230D1M-coated plate was collected and transferred into another well coated with the same antigen. After the third transfer, depleted antibodies were transferred to Pfs230D1M- or PvDBP-RII-coated plates and incubated for 1 h at RT, and further processing was performed as described above. IgG responses were considered cross-reactive if preincubation with Pvs230D1M resulted in reduced antibody reactivity in the Pfs230D1M ELISA. If preincubation with Pvs230D1M did not reduce antibody reactivity in the Pfs230D1M ELISA, the IgG response against Pfs230D1M in P. vivax-infected subjects was presumed to be due to vivax/falciparum co-infection or to pre-exposure to P. falciparum.
Pvs230D1M-depleted sera were transferred to PvDBP-RII-coated plates to confirm that the depletion or reduction found in the Pvs230D1M ELISA was specific for antibodies targeting Pvs230D1M (Figure S3). For this experiment, sera from 26 USA donors were used as negative controls.

Statistical Analyses

Statistical analyses were performed using GraphPad Prism software (GraphPad8). Analyses were performed using data from two independent experiments. We considered serum reactivity levels above a maximum threshold of 3 standard deviations from the geometric mean for the study population to be unreliable, and hence excluded one Brazilian subject from the analyses. Correlation analyses were tested using logistic regression analysis. One-way ANOVA followed by a multiple comparisons test was employed to compare different groups, when applicable. For Figures 4, 5, the Kruskal-Wallis test was performed. The significance level used was p < 0.05 for all statistical analyses.

Map Design

Maps were created using the maptools and raster packages and plotted using the ggplot2 package of the R software (http://www.r-project.org, version 3.5.3).

Anti-Pvs230D1M Antibody Response Is Induced During Malaria Infection

To assess the humoral response to sexual stage antigens in subjects living in areas of malaria transmission, sera/plasma samples obtained from patients presenting with acute P. vivax infection diagnosed by blood smear microscopy were examined for IgG levels against Pvs230D1M. Sera from 36 healthy non-immune USA donors were used as negative controls to determine the cut-off OD value (1.18). The seroprevalence of IgG antibodies with specificity for Pvs230D1M was 27.1% in Brazil (19/70 samples) and 26.6% in Cambodia (21/79 samples) (Figure 2).

IgG1 and IgG3 Are the Most Prevalent IgG Subtype Responses to Pvs230D1M

To evaluate differential representation in the immune response to Pvs230 among the four human IgG subtypes, we evaluated IgG1, 2, 3, and 4 responses against Pvs230D1M. Detectable IgG3 levels (19.3% in Brazil and 20.6% in Cambodia) and IgG1 levels (10.5% in Brazil and 15.1% in Cambodia) were most frequent, with limited IgG2 responses (5.3% in Brazil and 1.4% in Cambodia). The frequency of IgG4 responses was 0% at both study sites (Figure 3).

FIGURE 3 | Immunoglobulin G subclass responses to Pvs230D1M in Brazil and Cambodia. Healthy malaria-naïve donor sera were used to define the background for each subclass, and the cut-off was calculated based on mean control + 3 standard deviations. IgG3 and IgG1 responses were predominant with limited IgG2 and no IgG4 response. One-way ANOVA followed by multiple comparisons was used for this analysis and results are displayed as mean ± SD.

Pvs230D1M IgG Response Increases With Age in Cambodian Subjects

Although no direct correlation was observed between age and Pvs230D1M IgG titers (Figure 4), a cumulative effect of age on the Pvs230D1M antibody response was observed in Cambodia. Seroprevalence for Pvs230D1M was higher with increasing age strata in Cambodian subjects: 1.3% Pvs230D1M IgG responders among 1-9 year-olds; 6.3% among 10-19 year-olds; and 19.0% among those 20 years old and above (1-9 years group vs. 20 years and older, p = 0.021, Figure 4). Pvs230D1M titers were not evaluated for correlation with age in Brazil since the median age was 45 years and no samples were obtained from children.
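The seroprevalence figures above rest on per-sample positivity calls made with the cut-off rule stated in the Methods (mean OD of negative controls plus 3 standard deviations). A minimal sketch of that calculation in Python, using hypothetical OD values rather than the study data:

import numpy as np

def elisa_cutoff(control_ods):
    # Cut-off = mean OD of negative controls + 3 standard deviations (Methods)
    return np.mean(control_ods) + 3 * np.std(control_ods, ddof=1)

# Hypothetical OD values for illustration only
controls = np.array([0.21, 0.25, 0.19, 0.23, 0.27, 0.22])
samples = np.array([0.18, 0.95, 0.31, 1.40, 0.24])

cutoff = elisa_cutoff(controls)
positive = samples > cutoff          # background-adjusted ODs above the cut-off
print(f"cut-off OD = {cutoff:.2f}; seroprevalence = {100 * positive.mean():.1f}%")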
Anti-Pvs230D1M IgG Titers Do Not Correlate With Parasitemia Levels

We assessed the correlation of anti-Pvs230D1M specific antibodies with parasitemia levels. There was no significant association between Pvs230D1M IgG response and asexual parasitemia in subjects from Brazil (p = 0.38) or Cambodia (p = 0.43) (Figure 5).

Increased Pvs230D1M IgG Titers Correlate With Decreased Hemoglobin in Brazil, but Not in Cambodia

Despite its historical designation as "benign tertian malaria," P. vivax has received increased attention as a cause of severe sequelae, including severe anemia (24,25). We assessed whether P. vivax antibody levels are inversely correlated with hemoglobin levels, to support the hypothesis that anemia in either of these populations may be due to P. vivax infections. Antibodies elicited in response to Pvs230D1M were negatively correlated with hemoglobin levels in Brazilian subjects (r = −0.3906, p = 0.0168), but not in Cambodian subjects (Figure 6).

Increased PvDBP-RII and PvCSP IgG Titers Correlate With Decreased Hemoglobin in Brazil

To assess whether the correlation of hemoglobin and antibody titers is specific to Pvs230D1M IgG, we analyzed the relationships with the P. vivax proteins PvCSP (sporozoite stage protein) and PvDBP-RII (merozoite stage protein). The seroprevalence of antibodies with specificity for PvDBP-RII was 67.1% (47/70 samples) in Brazil and 48.1% (38/79 samples) in Cambodia, and for PvCSP 11.4% (8/70 samples) and 41.8% (33/79 samples), respectively (Figure 7). Hemoglobin levels negatively correlated with PvDBP-RII (r = −0.4100, p = 0.0086) and PvCSP (r = −0.3554, p = 0.0247) IgG titers in Brazil (Figure 8), but no correlations were seen in Cambodia, indicating that the relationships to hemoglobin are similar for seroreactivities against liver stage, blood stage, and sexual stage antigens within the two study populations. Antibody levels against liver and blood stage proteins did not significantly correlate with age in Brazil or Cambodia (Figure S1).

Antibody Responses Against Pfs230D1M, the Ortholog of Pvs230D1M, in P. vivax-Infected Subjects

We investigated whether antibody responses during P. vivax infection might also be reactive against Pfs230D1M. We found that 7.2% of the sera from P. vivax-infected subjects from Brazil and 16.5% from Cambodia had detectable antibody against Pfs230D1M (Figure 7).

Concurrent Antibody Responses to Pvs230D1M and Pfs230D1M Are Not Due to Shared Epitopes

Due to similarities between the Pvs230D1M and Pfs230D1M protein sequences, and the fact that P. falciparum malaria cases are also present in Brazil and Cambodia, we examined whether subjects may have developed antibody responses to both Pvs230D1 and Pfs230D1. Three (4.3%) subjects in Brazil and six (7.6%) subjects in Cambodia presented with concurrent antibody responses to Pvs230D1 and Pfs230D1. We assayed Pvs230D1M-depleted sera to investigate whether Pfs230D1M titers resulted from cross-reactive epitopes with Pvs230D1M. After a depletion assay to remove antibodies specifically generated against Pvs230 (Figure S2), Brazilian and Cambodian samples maintained IgG levels to Pfs230D1M comparable to pre-depletion levels (Figure 9), suggesting that responses were not due to cross-reactive epitopes. Pfs230D1M antibody titers may therefore be due to microscopically undetected co-infection with P. falciparum or previous exposure to P. falciparum in these populations.

DISCUSSION

Antibodies to sexual stage parasites can be induced in response to infection (9,10,26). Compared to P. vivax, the antibody response to P. falciparum sexual stages is better characterized, and it is known that children and adults in endemic areas acquire an immune response to Pfs230 (8).
However, the naturally acquired response to Pvs230 has not been characterized, despite P. vivax being responsible for the majority of malaria cases in Latin America and in Southeast Asia (1). Understanding adaptive immune responses to P. vivax antigens present in sexual-stage parasites in the mosquito and human host can contribute to the development of transmission-blocking strategies. In the current study, the prevalence of antibodies against domain 1 of Pvs230 (Pvs230D1M) during P. vivax infection was 27.1% in Brazil and 26.6% in Cambodia. Similarly, previous studies have shown that the ortholog Pfs230 reacted to sera from 28.6% of malaria-exposed adults in an area of seasonal transmission in Burkina Faso and from 20.7% of P. falciparum-infected donors in a low endemic area of Tanzania (8, 10). Although anti-Pfs230 antibody activity can be enhanced by complement (27,28), information on IgG subclasses generated against Pvs230 in naturally infected humans has not been described. Previous work showed that sera from mice immunized with Pvs230 reduce the number of oocysts in midguts of mosquitoes fed with blood from P. vivax-infected subjects, and this reduction occurs in the presence or absence of complement (18). We evaluated whether the natural antibody response to Pvs230D1M in humans would be characterized by higher levels of complement-fixing IgG subclasses. In our analyses of Pvs230D1M IgG subclass frequency, IgG1 and IgG3 were shown to be the predominant subclasses during malaria infection, and these isotypes are known to fix complement (29)(30)(31)(32)(33). This suggests that the functional activity of naturally acquired anti-Pvs230 antibody might be enhanced by complement, but this requires further investigation. Although the correlation between age and Pvs230D1M IgG was not statistically significant, Pvs230D1M-specific antibody titers in Cambodia differed (p = 0.021) between 1-9 year old subjects vs. subjects ≥20 years old. These results need to be interpreted in the context of characteristics of the study sites. In Pursat province, Cambodia, exposure to malaria frequently occurs as a result of occupation, and exposure is low in children. A longitudinal study must be conducted with a larger number of samples collected from high and low endemic areas to confirm the age-cumulative effect of the Pvs230 IgG response and its relationship with high or low transmission areas. Previous studies have suggested that IgG responses against sexual stage P. falciparum proteins do not increase with age (8-10, 26). For example, a study performed in a low P. falciparum transmission area in Tanzania did not reveal correlations between antibodies generated in response to Pfs230 or to Pfs48/45 and age (10).

FIGURE 9 | ELISA on Pvs230D1M-depleted sera. Pvs230D1M IgG levels were gradually reduced in sera by depletion assay (Figure S2), and then samples were submitted to ELISA against Pfs230D1M. Specificity of depletion was confirmed by measuring antibody titers to PvDBP-RII in sera depleted of Pvs230 IgG (Figure S3).

P. vivax is associated with lower hemoglobin concentration and can cause severe malaria (34)(35)(36)(37). Here, we found an inverse correlation between anti-Pvs230D1M antibody titers and hemoglobin levels in Brazil. Confirming that low levels of hemoglobin were due to malaria, we observed the same correlation with PvDBP-RII and PvCSP IgGs (elicited in response to blood stage and pre-erythrocytic stage parasites, respectively).
PvDBP-RII titers were higher in Brazil than in Cambodia, supporting our hypothesis that exposure in Rondônia state may be higher than in Pursat province, Cambodia. Intriguingly, PvCSP titers were higher in Cambodia than in Brazil, which may be attributable to the fact that the PvCSP recombinant protein was based on a parasite strain isolated in Iran (VK210), a country in Asia, closer to Cambodia than to Brazil. In Cambodia, no correlation was observed between antibody titers and hemoglobin levels. We hypothesize that high endemicity with more frequent infections in a region such as Brazil lowers hemoglobin levels and therefore negatively correlates with increased antibody levels, while low endemicity in a region such as Cambodia entails more sporadic infections with potentially less impact on hemoglobin levels. Of note, the ranges of hemoglobin levels were similar at the two study sites, as were the proportions of male and female subjects. Hemoglobin levels assessed prior to infection were not determined, since those samples were not available for this study. We found no relationship of anti-Pvs230D1M antibody to level of parasitemia in Brazil (r = 0.060, p = 0.6469) or Cambodia (r = 0.1193, p = 0.2948). Data on gametocytemia were not collected at the time of blood smear microscopy and therefore are not available for analysis. In future, it will be of interest to perform a longitudinal study, to evaluate serologic parameters identified before, during and after infection and to correlate P. vivax sexual stage antibody responses to gametocyte carriage. P. vivax-infected subjects from Brazil and Cambodia displayed antibodies against Pfs230D1M, the ortholog of Pvs230D1M in P. falciparum. ELISA on Pvs230D1M-depleted sera suggests that Pfs230D1M titers were produced in response to P. falciparum pre-exposure or co-infection. The Pfs230D1M IgG response was more frequent in Cambodia than Brazil, perhaps reflecting the greater proportion of malaria infections caused by P. falciparum in Cambodia vs. Brazil (58 vs. <10%) (1). Since P. falciparum infection is known to cause anemia (38,39), a mixed infection could influence the correlation of hemoglobin with antibody titers. A limitation in our study was that the low volume of plasma samples precluded determination of functional activity of Pfs230-purified IgG in Standard Membrane Feeding Assay (SMFA) that assesses the reduction of P. falciparum parasite transmission to mosquitoes. Our findings provide a first characterization of naturally acquired antibody responses to Pvs230 among P. vivax-infected subjects from regions of differing transmission intensity in Brazil and Cambodia. DATA AVAILABILITY STATEMENT Datasets generated for this study are included in the manuscript/Supplementary Files. Additional datasets are also available upon request. ETHICS STATEMENT The studies involving human participants in Brazil were reviewed and approved by the Centro de Pesquisa em Medicina Tropical (CAAEs: 0008.0.046.000-11, 0449.0.203.000-09) and the Ethics Committee of the Federal University of Minas Gerais (CAAE: 27466214.0.0000.5149), Brazil. The human study in Cambodia was approved by the Institutional Review Board (IRB), NIAID, NIH, and National Ethics Committee for Human Research (NECHR), Cambodia (ClinicalTrials.gov Identifier: NCT00663546). Written informed consent was obtained from each participant. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. 
ACKNOWLEDGMENTS

BT thanks the Office of Intramural Training and Education/NIH for mentoring and financial support. J. Patrick Gorres proofread and edited the manuscript. Daniel A. da Silva e Silva provided support for R programming.
Electronic peer review? (a report of the Electronic Libraries Programme ESPERE project)

The Electronic Submission and Peer Review project is one of approximately 60 projects funded by the Joint Information Systems Committee (JISC) of the Higher Education Funding Councils, as part of its Electronic Libraries Programme (eLib). ESPERE is particularly concerned with the possibility of using electronic methods for peer review, initially for biomedical articles. The project is jointly run by the University of Ulster and the Society for Endocrinology and has seven learned society publishers as its partners.

Introduction

The Electronic Submission and Peer Review project is one of approximately 60 projects funded by the Joint Information Systems Committee (JISC) of the Higher Education Funding Councils, as part of its Electronic Libraries Programme (eLib). ESPERE is particularly concerned with the possibility of using electronic methods for peer review, initially for biomedical articles. The project is jointly run by the University of Ulster and the Society for Endocrinology and has seven learned society publishers as its partners.

Peer review is an essential part of academic activity. In effect it is a quality assurance scheme which validates research by subjecting it to the scrutiny of other researchers in the same field of study. For a typical journal the peer review process starts with the receipt of an article from an author and continues with the distribution of copies of the article to at least two academics who are considered to be experts in the field of study concerned. The reports received from these referees are then considered by the editor of the journal and a decision is made as to whether the article can be accepted for publication. In fact direct acceptance is rare and very few articles are published without one, or even several, revisions. However, in all cases peer review consists of the following stages:

1. Receipt of article by editor or publisher
2. Choice of referees (by the editor)
3. Mailing the papers to referees
4. Decision based on one or more referees' reports and the editor's judgement
5. Communicating the decision (acceptable/acceptable after revision/rejection) to the author

Keeping track of the peer review process is a complex activity, particularly if a large number of papers are published each month, since many papers will be at a different stage in the process simultaneously. Even for a small journal which is only produced quarterly it can be hard to keep track of individual papers. Authors contacting the editor are often surprised to find that he or she cannot tell them its status immediately! Most publishers use an article handling database so that they can check the status of any given paper for an individual author and maintain control over the refereeing process by producing regular reports (a toy sketch of such status tracking follows at the end of this section). Most referees either fax their reports or use electronic mail (email) and some authors are now in a position to send their articles by email. However, relatively few journal publishers in biomedical subjects are currently accepting or indeed encouraging the use of this method. There is considerable concern about the quality of graphics and also the difficulty of maintaining special characters (e.g. Greek letters, accented characters, etc.) across multiple word processors and platforms. Most journal publishers accept (some require) authors' word processor files but they are not anxious to receive these as email attachments.
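Since the five stages above amount to a small state machine per article, the status tracking that an article handling database provides can be sketched in a few lines. The class and status names below are hypothetical, purely to illustrate the idea; they are not part of the ESPERE project or any publisher's actual system.

from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    RECEIVED = auto()            # stage 1: receipt by editor or publisher
    WITH_REFEREES = auto()       # stages 2-3: referees chosen and papers sent
    DECISION_PENDING = auto()    # stage 4: all reports in, awaiting editor
    ACCEPTED = auto()            # stage 5: decision communicated to author
    REVISION_REQUESTED = auto()
    REJECTED = auto()

@dataclass
class Article:
    title: str
    status: Status = Status.RECEIVED
    referees: list = field(default_factory=list)
    reports: list = field(default_factory=list)

    def assign_referees(self, names):
        self.referees = list(names)
        self.status = Status.WITH_REFEREES

    def record_report(self, report):
        self.reports.append(report)
        if len(self.reports) >= len(self.referees):
            self.status = Status.DECISION_PENDING

    def decide(self, decision):
        assert decision in {Status.ACCEPTED, Status.REVISION_REQUESTED, Status.REJECTED}
        self.status = decision

paper = Article("A study of endocrine function")
paper.assign_referees(["Referee A", "Referee B"])
paper.record_report("accept with minor revisions")
paper.record_report("revise methods section")
paper.decide(Status.REVISION_REQUESTED)
print(paper.status)              # Status.REVISION_REQUESTED

With a record like this, an editor (or an enquiring author) can be told a paper's status immediately by looking up the database rather than the paper file.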
ESPERE and email

The ESPERE project was originally conceived as an email version of the current system. However, even a limited exploration into the world of email has indicated that as a medium for the exchange of documents it is not ideal. The user interface of much of the software which deals with email is poor and off-putting for all but the dedicated enthusiast. A questionnaire completed by members of eLib was hardly encouraging (Email survey 1996)(1). Almost all the respondents had had to work hard to send documents to others, many had failed or were too concerned about failure to try, and the general consensus was that it was fine between consenting parties! In other words, if you find out first what someone else's system can deal with, and you have a reasonable level of expertise yourself, you can send even large files around. This is hardly an ideal situation and though it will doubtless improve immensely over the next year, with the newer versions of browsers like Netscape and Internet Explorer becoming integrated with mail software, the number of different systems currently in use makes it difficult for the time being.

The World Wide Web

As a result of these findings the focus has moved to the user-friendly side of the Internet, the World Wide Web, as a possible interface for the transfer of articles, but also as a possible medium for the further development of the peer review system. The Web was originally used by computer scientists, often students in their early twenties, but as recent surveys have shown (Kehoe & Pitkow 1996)(2) its use is now much more widespread across both subject specialities and age groups. It has the huge advantage of being an interactive medium. Thus information can be provided and requested. Since the nature of peer review is essentially a two-way process this has considerable advantages. The ESPERE project will be looking into the use of the Web for transferring articles from authors to journals and for providing referees with access to those articles. The spectacular growth of the World Wide Web has led to a great interest in electronic journals (documents which can be accessed from the researcher's desktop) and the numbers of these journals are increasing all the time. There was some reluctance at first to submit articles to these journals, partly because the medium was new and also because they did not necessarily include a peer review process. However, the importance of this method of quality control was soon recognised (Harnad 1995)(3) and their growth has continued. Many now operate a peer review process: Association of Research Libraries statistics (ARL 1991-)(4) show that the number of peer reviewed online journals and newsletters increased from 73 in 1994 to 511 in 1996 and this trend is set to continue. A few journals, such as the Medical Journal of Australia(5), Behaviour and Brain Sciences(6) and the Journal of Interactive Media in Education(7), are experimenting with novel methods of peer review which include making articles available before publication for general comment and in some cases providing the opportunity for contributors to have a dialogue with the authors.

Attitudes to electronic peer review

Despite these interesting developments most journal articles, even those intended for online journals and particularly those that include graphics, are still supplied to the journal publisher via the postal system (now quaintly known as 'snail mail' to computer enthusiasts).
Articles are posted to referees all over the world even if the reports are now sent back by email. The ESPERE project has been investigating the many aspects of the peer review system and in particular the attitudes of authors, referees and learned society publishers to electronic methods. This was done by a combination of interviews, a focus group and a questionnaire (ESPERE Stage One Report 1996)(8). There was widespread enthusiasm for electronic peer review and both authors and referees pointed out the numerous benefits of such a system. These include a fairer and more global approach to refereeing: if it were as easy (and as fast) to ask an academic anywhere in the world to referee a paper, this might raise the standard of refereeing and also give all countries equal access to journals. The cost of providing many copies of graphical material for review was mentioned by many as a serious problem. In fact much of the graphical material is now scanned and is therefore available as electronic files, and the use of digital cameras which produce photographs as electronic files is likely to increase. Many departments are maintaining photographic images on Kodak PhotoCDs and databases of images are now available through the Web. Although few referees were keen to edit articles on screen, most were prepared to print out articles for review while they were away from their desk (referees frequently read articles while travelling). Some referees complained that the cost of this would be transferred to them. However, those who were also authors (most of them) would benefit equally by not having to supply four copies of their own article. Publishers were also enthusiastic because of the opportunities to streamline systems. However, they realised that these developments would require some major changes and retraining of editorial staff, which would incur substantial costs even if these might be recouped later in increased efficiency.
Pseudotyping Improves the Yield of Functional SARS-CoV-2 Virus-like Particles (VLPs) as Tools for Vaccine and Therapeutic Development

Virus-like particles (VLPs) have been proposed as an attractive tool in SARS-CoV-2 vaccine development, both as (1) a vaccine candidate with high immunogenicity and low reactogenicity and (2) a substitute for live virus in functional and neutralization assays. Though multiple SARS-CoV-2 VLP designs have already been explored in Sf9 insect cells, a key parameter ensuring VLPs are a viable platform is the VLP spike yield (i.e., spike protein content in VLP), which has largely been unreported. In this study, we show that the common strategy of producing SARS-CoV-2 VLPs by expressing spike protein in combination with the native coronavirus membrane and/or envelope protein forms VLPs, but at a critically low spike yield (~0.04–0.08 mg/L). In contrast, fusing the spike ectodomain to the influenza HA transmembrane domain and cytoplasmic tail and co-expressing M1 increased VLP spike yield to ~0.4 mg/L. More importantly, this increased yield translated to a greater VLP spike antigen density (~96 spike monomers/VLP) that more closely resembles that of native SARS-CoV-2 virus (~72–144 spike monomers/virion). Pseudotyping further allowed for production of functional alpha (B.1.1.7), beta (B.1.351), delta (B.1.617.2), and omicron (B.1.1.529) SARS-CoV-2 VLPs that bound to the target ACE2 receptor. Finally, we demonstrated the utility of pseudotyped VLPs to test neutralizing antibody activity using a simple, acellular ELISA-based assay performed at biosafety level 1 (BSL-1). Taken together, this study highlights the advantage of pseudotyping over native SARS-CoV-2 VLP designs in achieving higher VLP spike yield and demonstrates the usefulness of pseudotyped VLPs as a surrogate for live virus in vaccine and therapeutic development against SARS-CoV-2 variants.

Introduction

COVID-19, the disease caused by the SARS-CoV-2 coronavirus, has led to a significant global burden over the last 3.5 years, totaling >760 M confirmed cases and ~7 M deaths worldwide [1]. Early development and rollout of vaccines played key roles in protecting against COVID-19 and reducing transmission of SARS-CoV-2. To date, more than 5.5 billion humans have received at least one dose of a COVID-19 vaccine, while more than 13 billion doses have been administered worldwide since late 2020 [1]. Despite the initial success of SARS-CoV-2 vaccines, primarily mRNA lipid nanoparticles (LNPs), key challenges remain for future vaccine development. Both the high mutability [2,3] and transmissibility [4,5] of SARS-CoV-2, resulting in the rapid emergence of more drifted variants [6], coupled with the waning immunity in humans previously vaccinated or infected [7][8][9], have necessitated the rollout and update of booster vaccines in efforts to reduce more severe outcomes. However, given the continual decline in booster vaccine uptake [10] and the continued circulation of the virus for the foreseeable future [1], there is a critical need to develop next-generation SARS-CoV-2 vaccines.
In addition to the need for more effective vaccines, SARS-CoV-2 vaccine and therapeutic development also faces key safety challenges. Working with SARS-CoV-2 virus requires biosafety level 3 (BSL-3) containment, severely limiting the number of experiments that can be performed due to high cost and limited availability of facilities [11]. To address this challenge, human immunodeficiency virus (HIV-1), murine leukemia virus (MLV), and vesicular stomatitis virus (VSV) pseudotyped with the SARS-CoV-2 spike protein have been most commonly used as substitutes for live virus to study binding properties and quantify neutralizing antibody titers, among other applications [12]. Despite their utility, there are several limitations for pseudovirus systems that are still replication competent, most notably safety concerns and low yields for the HIV-1 platform, morphology mismatch between VSV pseudovirus (bullet) and SARS-CoV-2 (spherical), as well as broader difficulties in quantifying pseudovirus titers and ensuring similar antigen surface density to that on the SARS-CoV-2 virus [12,13]. Taken together, there is a great need for more easily characterizable biological tools such as viral cDNA technologies [14,15], virosomes [16], and virus-like particles [17] that can emulate native SARS-CoV-2 virus.

Virus-like particles (VLPs) represent an attractive platform to serve a dual purpose in SARS-CoV-2 vaccine development, offering several unique advantages both as a vaccine candidate and as a replacement for live virus in assays. SARS-CoV-2 VLPs are commonly produced by co-expressing spike (S), envelope (E), and membrane (M) proteins in a host expression system [18][19][20][21]. They mimic the structure of the native SARS-CoV-2 virus but lack viral RNA and the ability to replicate. In contrast to the mRNA LNP technologies used for SARS-CoV-2 vaccines, VLPs have a much more favorable reactogenicity profile, in line with more traditional vaccine technologies [22]. Another advantage is the rapid production timeline (2-3 months) of VLPs [23][24][25], which can address the need for swift update of the vaccine should new variants arise. Furthermore, VLPs are highly immunogenic, and performed well as a booster in mice previously vaccinated with SARS-CoV-2 mRNA LNPs, eliciting slightly higher antibody levels with greater avidity compared to an mRNA LNP booster [26]. Finally, the molecular mimicry of VLPs to native virus makes them a useful biological tool to replace live virus in vaccine and therapeutic development, allowing for binding and neutralization assays to be carried out safely at biosafety level 1 (BSL-1) conditions.
While several host expression systems have been investigated to produce SARS-CoV-2 VLPs, including ones utilizing mammalian [17,[27][28][29][30] and plant cells [31,32], the baculovirus expression vector system (BEVS) in insect cells is a particularly advantageous platform due to its ability to achieve high expression levels of recombinant proteins [18,19,21,25,33,34]. A variety of designs for SARS-CoV-2 VLPs have been produced in insect cells, with most incorporating the S protein in combination with both E and M structural proteins to form budded VLPs [18,19,21,34]. Despite the successful demonstration that expressing all three proteins can lead to the production of VLPs, it is unclear if both E and M are necessary for VLP production and how the inclusion of each of these proteins affects the overall VLP spike yield, an important parameter that is largely unreported in SARS-CoV-2 VLP studies. Moreover, though a limited number of studies [35,36] have produced VLPs towards the SARS-CoV-2 variants of interest using mammalian cells, insect cell SARS-CoV-2 VLPs to date have only incorporated the S protein from the ancestral SARS-CoV-2 strain, and VLPs based on the major circulating variants have yet to be explored.

In this study, we sought to determine the minimum requirement for native SARS-CoV-2 VLP formation in Sf9 insect cells and quantify the resulting VLP spike yield. Our data demonstrated that co-expressing S protein with either E or M protein resulted in VLP formation, but the VLP spike yield was lower compared to co-expressing all three proteins. However, regardless of the combination, all three showed very low VLP spike yields (<0.1 mg/L). To overcome this limitation, we formed pseudotyped VLPs by co-expressing an S-HA fusion protein with influenza M1, which improved the VLP spike yield approximately fivefold. This improvement further translated to VLPs with a greater antigen density (~96 S monomers/VLP), closely resembling that of native SARS-CoV-2 virus (72-144 S monomers/virion). This pseudotyping strategy also led to the successful production of VLPs for the major circulating variants, including alpha (B.1.1.7), beta (B.1.351), delta (B.1.617.2), and omicron (B.1.1.529). More importantly, the pseudotyped wild-type and variant VLP spike proteins were all shown to be functional, exhibiting differential affinities for binding with ACE2. Finally, we demonstrated the utility of VLPs to test neutralizing antibody activity in a simple, acellular ELISA assay, highlighting the usefulness of VLPs as SARS-CoV-2 virus surrogates for vaccine and therapeutic development.

Sf9 Insect Cells Support SE, SM, and SEM VLP Formation with Low Spike Yield

The majority of SARS-CoV-2 VLP designs reported to date have co-expressed the spike (S), envelope (E), and membrane (M) proteins in insect cells [18,19,21,34], or S, E, and M proteins along with the nucleoprotein (N) in mammalian cells [17,[27][28][29][30]. While these designs all successfully produced VLPs, it is unclear if either E or M protein on its own can support the formation of S-decorated VLPs and if the resulting VLP spike yield is increased as a result of expressing fewer recombinant proteins. To this end, three baculovirus vectors were generated to express S protein in combination with E and/or M proteins: SE, SM, and SEM (Figure 1A). All protein sequences were derived from the Wuhan-Hu-1 SARS-CoV-2 strain (accession #: NC_045512). Sf9 cells were then infected with the baculovirus vectors at a multiplicity of infection (MOI) of 3.
At 72 h post infection, particles from the culture supernatants were harvested and processed for transmission electron microscopy (TEM) analysis (see Methods). As shown in Figure 1B, TEM analysis revealed the formation of spherical particles ~80-130 nm in diameter for all three constructs, consistent with the morphology of native SARS-CoV-2 virions (~60-140 nm diameter [37]). These particles also showed binding with multiple anti-S immunogold particles, indicating that the S protein was successfully incorporated into VLPs. Western blot analysis of these VLPs further showed that the S, E, and M proteins could be detected in the intended combinations for the SE, SM, and SEM VLPs and migrated according to their expected molecular weight (180 kDa, 12 kDa, and 25 kDa, respectively) (Figure 1C). Taken together, these data demonstrate that the formation of SARS-CoV-2 VLP in Sf9 cells does not require co-expressing S protein with both E and M proteins, and Sf9 cells support the formation of SE, SM, and SEM VLPs.

As the SE and SM VLPs require the Sf9 cells to express one less recombinant protein compared to the SEM VLP, we next examined if this leads to higher spike yields in SE and SM VLPs. The S protein in VLPs was quantified by Western blot analysis using a standard curve generated from purified S protein (Figure S1A). The SE, SM, and SEM VLP spike yields ranged from 0.04-0.08 mg/L, with SEM VLP having the highest spike yield, followed by SM and SE (Figure 1D). However, none of the yields were statistically different from each other (p > 0.05). Notably, the spike yields for all three native VLP constructs reported here are markedly low, particularly when compared to the influenza VLP hemagglutinin (HA) yield, which typically exceeds 1 mg/L [38]. When the S protein in cell lysates was quantified using Western blot (Figure S1B), all three VLP constructs showed high S protein expression levels (~18-20 mg/L) (Figure 1E), comparable to that of HA protein [39,40]. This indicates that the cellular expression of the S protein itself is not the cause of low VLP spike yield. Therefore, other factors severely restrict the formation of native SARS-CoV-2 VLPs in Sf9 cells. The VLP formation efficiency was evaluated using the % of S protein incorporated into VLPs, defined as the VLP spike yield divided by the cellular S protein expression level. As shown in Figure 1F, only ~0.2-0.4% of the cellular S protein expressed was incorporated into VLPs. This suggests that native SARS-CoV-2 VLP formation is incredibly inefficient in Sf9 cells. It has been shown that the E and M proteins alter the secretory and glycosylation pathways in mammalian cells, resulting in the retention of S protein intracellularly [41]. Given the high influenza VLP HA yield in Sf9 cells [38], we hypothesized that a pseudotyping strategy based on influenza proteins would improve the SARS-CoV-2 VLP spike yield.
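The incorporation efficiency defined above (VLP spike yield divided by cellular S expression level) is a simple ratio; a minimal Python sketch using the approximate values reported above (the function name is ours, for illustration only):

def percent_incorporated(vlp_spike_mg_per_l, cellular_s_mg_per_l):
    # VLP formation efficiency: VLP spike yield / cellular S expression level
    return 100 * vlp_spike_mg_per_l / cellular_s_mg_per_l

print(percent_incorporated(0.08, 20))   # SEM VLP: ~0.4%
print(percent_incorporated(0.04, 18))   # low end of SE/SM: ~0.2%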
Pseudotyping Improves SARS-CoV-2 VLP Spike Yield

Pseudotyping using influenza proteins was previously employed for SARS-CoV-1 VLPs in Sf9 cells, leading to >twofold improvement in VLP spike yield (1 mg/L) [42] compared to that of SARS-CoV-1 SEM VLPs [43]. A similar strategy was more recently utilized for SARS-CoV-2 VLPs in mammalian [44] and Sf9 insect cells [33,45], though the VLP spike yields were not reported. To determine if pseudotyping similarly improves SARS-CoV-2 VLP spike yield, a baculovirus vector was created to express the SARS-CoV-2 S ectodomain (aa 1-1213) fused to the transmembrane (TM) and cytoplasmic tail (CT) domains of an H1N1 influenza HA protein (accession #NP_040980) in combination with the influenza matrix protein M1 (denoted SHAM1, Figure 2A). For simplicity, we refer to this influenza pseudotyping strategy as "pseudotyping" throughout, including designs from other studies [44,45] that used the same HA domains but from different influenza strains. Similar to SE, SM, and SEM VLPs (Figure 1B,C), SHAM1 VLPs exhibited a spherical morphology and showed binding with multiple anti-S immunogold particles (Figure 2B). As shown in Figure 2C, Western blot analysis revealed that the S-HA and M1 proteins were incorporated into the VLPs with the correct molecular weight (S-HA, 174 kDa and M1, 25 kDa). However, the SHAM1 VLPs were slightly larger in size (~100-200 nm diameter), similar to previously engineered pseudotyped VLPs for both SARS-CoV-1 (~160 nm diameter) [42] and SARS-CoV-2 (~80-200 nm diameter) [44,45]. To examine if pseudotyping improves VLP spike yield, S protein quantification was performed as described above (Figure S1A). As expected, the SHAM1 VLP spike yield showed a significant improvement of ~fivefold to ~0.4 mg/L (Figure 2D). This yield is more comparable to the influenza VLP HA yield [38] as well as the SARS-CoV-1 VLP S-HA yield [42].
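At the sequence level, the S-HA fusion amounts to joining the spike ectodomain to the HA TM and CT region. The sketch below is illustrative only: the sequences are stand-ins and the HA TM start position is an assumed value, while the actual construct uses S aa 1-1213 plus the TM/CT domains of the H1N1 HA cited above.

def make_s_ha_fusion(s_seq, ha_seq, s_ecto_end, ha_tm_start):
    # Concatenate the spike ectodomain with the HA TM + CT region
    return s_seq[:s_ecto_end] + ha_seq[ha_tm_start - 1:]

s = "M" + "A" * 1272    # stand-in for a 1273-aa spike protein
ha = "M" + "G" * 565    # stand-in for a ~566-aa HA protein
fusion = make_s_ha_fusion(s, ha, s_ecto_end=1213, ha_tm_start=530)  # 530 is assumed
print(len(fusion))      # 1213 ectodomain residues + 37 HA TM/CT residues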
Notably, the cellular S protein expression level of SHAM1 (~20 mg/L) was not significantly different than that of SEM (Figure 2E), suggesting that the increased VLP spike yield was not a result of improved S protein expression, but was rather driven by more efficient VLP formation from influenza pseudotyped S-HA and M1 proteins. Indeed, the % of S protein incorporated into pseudotyped VLP was ~1.9%, representing a ~fivefold improvement compared to SEM VLP (Figure 2F). Due to the higher % S protein incorporated into SHAM1 VLPs compared to native SARS-CoV-2 VLPs (Figures 1F and 2F), we hypothesized that this improved S protein incorporation would result in VLPs with greater spike antigen density (i.e., the number of S monomers per VLP). Using the VLP spike yield (Figure 2D) and the total number of VLPs obtained using nanoparticle tracking analysis (see Methods), the spike antigen density of SEM VLP was determined to be ~22 S monomers/VLP (Figure 2G). In contrast, SHAM1 VLP spike antigen density was ~96 S monomers/VLP; thus, incorporating S-HA and M1 proteins into VLPs increased the antigen density by >fourfold. This improvement resulted in pseudotyped VLPs with a spike antigen density similar to that of native SARS-CoV-2 virus (~72-144 S monomers/virion [37,[46][47][48]). There are several potential factors that may explain the greater antigen density of influenza pseudotyped VLPs compared to the native SARS-CoV-2 VLPs. Influenza M1 is known to be a major driving force in influenza virus budding due to its high degree of oligomerization and its strong association with the cytoplasmic tail of influenza HA at the plasma membrane [49]. In contrast to the E and M proteins, M1 is not known to alter the secretory pathway and retain other proteins. In addition, foreign transmembrane domains inserted into influenza HA have been previously shown to affect its folding into trimers and transport to the plasma membrane [50]. Future work is needed to elucidate the structural differences between the native spike and S-HA fusion proteins that might affect the conformation and in turn immunogenicity.
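The antigen density figures above follow from the VLP spike yield and the NTA particle count. A minimal back-of-the-envelope sketch (the particle count below is hypothetical, back-calculated to reproduce roughly the reported ~96 monomers/VLP):

AVOGADRO = 6.022e23

def spike_monomers_per_vlp(spike_mg_per_l, spike_kda, vlps_per_l):
    # Convert spike mass concentration to monomers/L, then divide by VLPs/L
    monomers_per_l = (spike_mg_per_l * 1e-3 / (spike_kda * 1e3)) * AVOGADRO
    return monomers_per_l / vlps_per_l

# 0.4 mg/L S-HA at 174 kDa with an assumed NTA count of 1.4e13 particles/L
print(spike_monomers_per_vlp(0.4, 174, 1.4e13))   # ~99 monomers/VLP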
Despite the successful production of several pseudotyped SARS-CoV-2 VLPs using mammalian [44] and insect [33,45] expression systems, the benefit of pseudotyping in terms of vaccine efficacy is unclear. One design using HA TM/CT and M1 sequences derived from an H5N1 influenza virus resulted in SHAM1 VLPs eliciting an inferior antibody response in mice compared to the corresponding SHA VLPs based on S-HA alone [45]. This is surprising given that M1 is known to promote influenza VLP formation in Sf9 cells [51,52], and suggests that other factors such as differences in VLP morphology or S-HA antigen density with or without M1 may explain this result. Moreover, using a codon-optimized S ectodomain in these same two constructs showed the opposite effect, where SHAM1 VLPs resulted in superior immunogenicity compared to SHA VLPs [45]. Further, while none of these four VLPs protected mice against lethal SARS-CoV-2 viral challenge [45], another study [44] using the same SHAM1 VLP design and codon-optimized S ectodomain did. These discrepancies observed in the immune responses elicited by pseudotyped SARS-CoV-2 VLPs highlight the need to evaluate parameters for VLP quality, such as the VLP spike yield and VLP spike antigen density described in this study. As shown in Figure 2G, different VLP designs can lead to VLPs with significantly different antigen densities, which have been correlated with antibody titer and survival against viral challenge [53,54]. In addition, the relative VLP purity may also play a role, as each expression system can introduce different contaminants into the VLP sample. In the present work as well as previous studies, the baculovirus expression vector system in Sf9 cells results in the production of baculovirus vectors as well as VLPs [25]. Due to their similarities in size and density, separation of baculovirus from VLP remains challenging [55], and a well-established metric to define VLP purity is presently lacking. Therefore, including quantitative parameters in VLP characterization will help better benchmark the vaccine efficacy of different VLPs from different studies.
Pseudotyped Alpha, Beta, and Delta VLPs Show Higher Spike Yield Than Omicron

VLP designs in insect cells thus far have only incorporated the S protein from the ancestral SARS-CoV-2 strain, and VLPs based on the major circulating variants have yet to be explored using the baculovirus expression vector system in Sf9 cells. Given the rapid emergence of new SARS-CoV-2 variants, we next sought to produce pseudotyped VLPs for several key variants reported to date, including alpha (B.1.1.7), beta (B.1.351), delta (B.1.617.2), and omicron (B.1.1.529). These variants contain mutations in the receptor binding domain (RBD) of the S protein, ranging from as few as 1 mutation in alpha to as many as 15 mutations in the case of omicron [5]. To determine if these mutations still allow for the formation of pseudotyped VLPs in Sf9 cells, the protein sequences of the variant S RBDs were cloned in place of the Wuhan-Hu-1 (denoted as "WT") S protein RBD using the SHAM1 baculovirus vector as backbone (Figure 3A). Following infection with the respective baculovirus vectors, particles were harvested and prepared for TEM and Western blot analysis as described above (Figures 1B and 2B), with one exception: anti-S2 antibody was used to ensure the same binding affinity for variant S protein detection and quantification. As shown in Figures 3B,C and S2, TEM, immunogold labeling, and Western blot analysis revealed that all four variant SHAM1 VLPs exhibited a spherical morphology with a diameter ranging ~100-200 nm, showed binding with multiple anti-S2 immunogold particles, and incorporated the expected proteins with the correct molecular weight (S-HA, 174 kDa and M1, 25 kDa), similar to WT SHAM1 VLPs as observed in Figure 2B,C. Therefore, the pseudotyping strategy allows for the formation of alpha, beta, delta, and omicron VLPs in Sf9 cells.

Although all four variants successfully formed pseudotyped VLPs, the S protein band for omicron was much weaker compared to the other three variants (Figure 3C). The Western blot quantification using anti-S2 antibody (Figure S3A) showed that the omicron VLP spike yield (~0.07 mg/L) was ~sixfold lower than alpha, beta, and delta (~0.4 mg/L, Figure 3D). Interestingly, the omicron S protein cellular expression level (~8 mg/L) was ~2.5-fold lower than the other three variants (~20 mg/L, Figures 3E and S3B), suggesting that this reduced cellular expression was partially offsetting the benefit of pseudotyping in enhancing VLP formation. Indeed, the % of S protein incorporated into omicron VLP was ~0.9% (Figure 3F), which was not as high as the other variants but still >twofold higher than the native SARS-CoV-2 VLPs (Figure 1F).
As the omicron S RBD has 15 mutations compared to WT (Figure 3A), it is unclear which mutations or specific combinations thereof are responsible for the lower S protein expression level. One other study using Vero-E6 cells reported reduced omicron S protein expression compared to WT [56], though the specific mutation (N679K) responsible for the observed lower yield resides outside the RBD, and was not included in our VLP design. One important implication is that reduced omicron S protein expression during infection, or potentially after mRNA vaccination, may lead to a less productive antibody response against the S protein, which may in turn lead to more breakthrough infections [56]. Nevertheless, our data demonstrated that the baculovirus expression vector system in Sf9 cells is useful in evaluating and characterizing S protein expression and SARS-CoV-2 VLP formation. The resulting pseudotyped WT and variant VLPs further represent an attractive alternative to live viruses for the study of S protein binding and neutralization properties.

Pseudotyped VLPs Are Functional and Bind ACE2 with Varying Affinity

SARS-CoV-2 viral infection is initiated by the binding of S protein to its target receptor, angiotensin-converting enzyme 2 (ACE2) [57]. ELISA has proven a useful method to rapidly assess the relative binding affinities between WT and several variant S protein RBDs to ACE2 [58,59]. However, this assessment using VLPs, which would allow for binding of S protein to ACE2 in a more physiologically relevant condition, has not been explored yet. To this end, we next sought to evaluate the relative binding affinities of pseudotyped WT, alpha, beta, delta, and omicron VLPs to ACE2 using conventional sandwich ELISA, where VLPs are titrated on ACE2-coated plates and quantified with anti-S2 followed by anti-rabbit IgG-HRP antibodies (Figure 4A, left panel). Since the higher binding affinity of the delta RBD with ACE2 compared to WT RBD is well established [58,59], these two pseudotyped VLPs were tested first. As shown in Figure 4B, compared to the negative control influenza H1M1 VLPs, both WT and delta VLPs showed a dose-dependent response, indicating that the S protein in both VLPs is functional and can bind ACE2. However, there was no difference observed in the EC50 (half maximal effective concentration) between WT and delta VLPs, in contrast to previous studies using soluble RBDs [58,59]. A plausible explanation for this discrepancy is that, in the conventional sandwich ELISA, the interaction between ACE2 and S protein on VLPs is influenced by avidity (Figure 4A, left panel). Avidity effects have been shown to enhance the binding affinity >10-1000-fold [60,61], and may mask binding affinity differences depending on the experimental setup [62,63]. Notably, the delta VLPs showed stronger binding than the WT VLPs, but this was evident only at lower concentrations of VLPs (≤2.5 µg/mL S protein) (Figure 4B). This observation further supports avidity effects on binding behavior in this conventional sandwich ELISA setup.
To eliminate the influence of the avidity effect, we modified the ELISA as depicted in the right panel of Figure 4A. In this modified format, ACE2-Fc protein is titrated in wells containing the same amount of captured VLPs, which allows for the assessment of the monovalent interaction between S protein on VLPs and ACE2-Fc. A critical step here is ensuring that the same amount of VLPs, based on S protein amount, is captured in each well. To achieve this, 10 µg/mL of VLPs (S protein) was loaded, and the captured VLPs were quantified using anti-S2 followed by anti-rabbit IgG-HRP antibodies. All wells showed the same colorimetric signal (Figure S4), confirming that they contained the same amount of S protein on captured VLPs.

Once the same amount of captured VLPs was confirmed, the modified ELISA was performed by titrating ACE2-Fc and detecting with protein A-HRP (Figure 4A, right panel). All five pseudotyped SARS-CoV-2 VLPs showed a dose-dependent response, indicating that the S protein in all VLPs is functional and can bind ACE2-Fc (Figure 4C). Based on the binding curves, EC50 values were determined for each of the S:ACE2-Fc interactions. Compared to WT S:ACE2-Fc (EC50 0.7 µg/mL), the delta S:ACE2-Fc EC50 value was ~twofold lower (~0.3 µg/mL), indicating a stronger binding affinity between the delta VLP S protein and ACE2-Fc (Figure 4D). Beta behaved similarly to delta. In contrast, the binding of omicron to ACE2-Fc was significantly weaker than WT, with a ~twofold greater EC50. These data agree with previous studies using soluble RBDs [58,59], confirming the validity of the modified ELISA format for measuring the relative binding affinity. Additionally, our data revealed that the binding of alpha to ACE2-Fc was similar to beta. Interestingly, this result suggests that the N501Y mutation in S protein, common to both alpha and beta, drives the increase in binding affinity to ACE2 compared to WT. Despite also sharing the N501Y mutation, omicron showed much weaker binding affinity than WT. A computational modeling study suggested that nine other mutations in the omicron RBD would decrease its binding affinity to ACE2 [64]. The weaker binding of omicron S protein to ACE2 implicates other interactions, such as omicron's preference for cathepsin L instead of TMPRSS2, as the driving force behind its enhanced infectivity and transmissibility [65].
Using Pseudotyped VLPs as Surrogates for Live Virus in a Neutralization Assay In addition to understanding virus binding properties, another important aspect of SARS-CoV-2 vaccine and therapeutic development is the screening of viral inhibitors and antibodies, particularly for their virus-neutralizing properties.However, neutralization assays remain challenging due to the BSL-3 requirement for experiments involving live SARS-CoV-2 virus [66].As a result, pseudoviruses [67][68][69] or VLPs [21,35,36,70] have been used as virus surrogates in neutralization assays.In these studies, a reporter system (e.g., luciferase or GFP) is incorporated in the pseudovirus or VLP to evaluate if antibodies can block their ability to enter the target cell (e.g., ACE2-expressing cells).In the present work, we developed a simple, acellular neutralization assay based on ELISA (Figure 5A), allowing us to measure antibody neutralization activity against VLPs without relying on a reporter system.In the context of this assay, neutralization is defined as the ability of the neutralizing antibody to block the binding of VLPs to ACE2 [71,72].Briefly, 5 µg/mL of VLPs was preincubated with varying concentrations of a neutralizing monoclonal antibody (mAb) raised against WT S protein, and then loaded into ACE2-coated wells.After washing away unbound VLPs, captured VLPs were detected with anti-S2 followed by anti-rabbit IgG-HRP antibodies.As shown in Figure 5B, increasing concentration of neutralizing mAb prevented more VLPs from being captured by ACE2, leading to a reduction in signal.To quantify the percent of neutralization, the signal for a given neutralizing mAb concentration was normalized against the signal for VLPs preincubated without neutralizing mAb.Percent neutralization was then plotted as a function of mAb concentration (Figure 5C), and the half maximal inhibitory concentration (IC 50 ) was determined to compare the antibody neutralization against both WT and variant VLPs (Figure 5D). 
Overall, the neutralizing mAb blocked WT VLP binding to ACE2 most effectively, reaching >87% neutralization (Figure 5C), with the lowest IC50 value (0.03 µg/mL) among all VLPs tested (Figure 5D). This is expected, as the mAb tested in this study was raised against the WT S protein. Comparatively, the neutralizing mAb blocked the alpha and beta VLPs less effectively, with IC50 values ~2.7- and ~6.7-fold higher than that of the WT VLP, respectively (Figure 5D). This result indicates that in addition to the N501Y mutation, which is shared between alpha and beta variants, the E484K and/or K417N mutations also affect the neutralization activity of this mAb. For delta and omicron VLPs, <30% neutralization was observed at the highest concentration of neutralizing mAb tested (Figure 5C), and the IC50 values could not be determined (Figure 5D). This result suggests that the L452R and/or T478K mutations found in the delta variant nearly abolished the neutralization activity of the mAb tested. These results are consistent with the data from the manufacturer, which showed significantly less neutralization of delta and omicron pseudoviruses compared to WT in a cell-based microneutralization assay. Therefore, the ELISA-based acellular neutralization assay developed here can provide quantitative data on the efficacy of neutralizing antibodies. It is important to note that this assay evaluates the ability of neutralizing antibodies or inhibitors to block binding of VLPs to ACE2. Other cellular-based assays are needed to further demonstrate neutralization through the point of cellular fusion and entry. Nevertheless, combined with the advantages of its fast completion in less than a day and adaptability for high-throughput screening of antibodies and viral inhibitors, this assay adds another effective approach to the neutralization assay toolkit.
In summary, pseudotyped VLPs produced in Sf9 insect cells are a promising dual-purpose platform in the fight against COVID-19. Compared to the critically low VLP spike yields of native SE, SM, and SEM VLPs produced in Sf9 insect cells, influenza pseudotyped VLP spike yields were significantly improved, resulting in VLPs with antigen density similar to that of the native SARS-CoV-2 virus. We successfully employed this pseudotyping strategy to produce VLPs incorporating the alpha, beta, delta, and omicron RBDs, the first example of variant VLPs produced in Sf9 insect cells. Despite the lower omicron VLP spike yield, we were able to demonstrate the functionality of all pseudotyped VLPs by quantifying their differential binding affinity to ACE2. Finally, we showed the utility of pseudotyped VLPs as virus surrogates in evaluating neutralizing antibody activity in a simple, acellular ELISA format. Taken together, pseudotyped VLPs produced in Sf9 cells represent a safe and effective tool that allows the investigation of SARS-CoV-2 viral binding properties and antibody neutralization activity to be performed at BSL-1 facilities. This accessibility opens avenues for the wider research community to contribute to the collective endeavor in combatting COVID-19.

Recombinant Baculovirus Generation

The DNA sequences encoding the SARS-CoV-2 S, E, and M proteins were amplified from gBlock fragments purchased from Integrated DNA Technologies. First, the S gene was cloned into the BamHI/HindIII site in plasmid pFastBac Dual to create the intermediary plasmid pFastBac Dual-S. E and M genes were then cloned into the XhoI/XmaI site in separate pFastBac Dual-S plasmids to create plasmids pFastBac Dual-SE and pFastBac Dual-SM, respectively. The expression cassette for M, including the p10 promoter, M gene, and HSV terminator, was then PCR amplified from pFastBac Dual-SM and cloned into the AvrII site of pFastBac Dual-SE to create pFastBac Dual-SEM.

Previously, influenza HA and M1 genes were cloned into the XbaI/HindIII and KpnI/XmaI sites in plasmid pFastBac Dual, respectively, to create plasmid pFastBac Dual-H1M1 [73]. This plasmid served as the backbone for all pseudotyped SARS-CoV-2 S/Influenza HA fusion plasmids. The S/HA fusion fragment was created using overlap extension PCR. First, the S ectodomain fragment (S-ECTO) and the HA transmembrane and cytoplasmic tail domain fragment (HA-TMCT) were amplified from pFastBac Dual-S and pFastBac Dual-H1M1, respectively. S-ECTO and HA-TMCT fragments were then spliced and cloned into the XbaI/HindIII site of pFastBac Dual-H1M1, replacing the full-length HA gene to create pFastBac Dual-SHAM1.

Receptor binding domain (RBD) mutations for the SARS-CoV-2 alpha (N501Y), beta (K417N, E484K, N501Y), and delta (L452R, T478K) variants were introduced into separate pFastBac Dual-SHAM1 plasmids using QuikChange mutagenesis [74] to create pFastBac Dual-SHAM1-Alpha, pFastBac Dual-SHAM1-Beta, and pFastBac Dual-SHAM1-Delta. The sequence encoding the first 685 amino acids of the S ectodomain, including the omicron RBD sequence, was amplified from a gBlock gene fragment and cloned into the XbaI/ApaI site in the pFastBac Dual-SHAM1 plasmid to create pFastBac Dual-SHAM1-Omicron. The templates and primers used for all PCR reactions are listed in Table S1. All DNA sequences were confirmed using Sanger sequencing.
The recombinant baculovirus genome (i.e., bacmid) was created by transforming each pFastBac Dual plasmid into DH10Bac via transposition. After confirming the recombination events using blue/white colony screening and PCR, the recombinant bacmids were purified using a PureLink HiPure Plasmid Miniprep kit (Invitrogen, Carlsbad, CA, USA). The purified bacmids were then transfected into Sf9 cells using Cellfectin II (Invitrogen) according to the manufacturer's protocol to generate recombinant baculovirus P1 stocks, which were amplified in Sf9 cells to obtain high-titer P2 baculovirus stocks for use in protein expression and VLP production experiments.

Cellular Expression and Protein Quantification

Cellular expression of S, E, and M proteins as well as S/HA and M1 was confirmed using Western blot analysis of cell lysates using anti-S (40591-T62, Sino Biological US Inc., Wayne, PA, USA), anti-M (NBP3-07058, Novus Biologicals, Centennial, CO, USA), anti-E (NBP3-07060, Novus Biologicals), and anti-M1 (PA532253, Invitrogen). For variant S proteins, anti-S2 (40590-T62, Sino Biological US Inc.) was used, as the S2 domain was conserved across all variants. To quantify cellular yields of all S protein constructs, an S protein standard (40589-V08B1, Sino Biological US Inc.) was used. Following primary antibody staining, alkaline phosphatase-conjugated anti-mouse or anti-rabbit IgG secondary antibody (Life Technologies) was used. NBT-BCIP (Thermo Fisher Scientific) was used to develop Western blots. Densitometric analysis of Western blots was performed using a Gel Doc EZ™ Imager (Bio-Rad, Hercules, CA, USA) to generate standard curves for S, which were then used to calculate the S cellular expression level.

Virus-like Particle (VLP) Production and Characterization

Influenza VLPs were produced from Sf9 cells infected at an MOI of 3 and harvested 72 hpi. Cell debris was removed from the supernatant by centrifugation at 300× g for 20 min followed by 10,000× g for 20 min. The cleared supernatant was ultracentrifuged at 150,000× g for 2 h, and the pellet containing VLPs was resuspended in PBS containing 40% glycerol. All centrifugation steps were carried out at 4 °C. The number of VLPs was quantified using a NanoSight NS300 particle tracking system (Malvern Panalytical, Malvern, UK). Specifically, VLPs were diluted in PBS to the manufacturer's recommended concentrations prior to injection. Videos of 60 s in length were recorded for each sample, and the particle concentration was determined using the nanoparticle tracking analysis (NTA) software provided with the NS300 system. The VLP spike yield (defined as the amount of S protein in VLPs) was quantified by densitometric analysis of Western blots as described in the section above. For each VLP construct, the particle concentration and VLP spike yield are represented as the mean of three independent experiments. The spike antigen density (defined as the number of S monomers per VLP) was then determined using the equation below:

Spike Antigen Density = (VLP Spike Yield × N_A) / (MW_Spike × Number of VLPs)

where MW_Spike is the molecular weight of the spike protein (180 kDa for the native S protein, 174 kDa for the S-HA fusion protein) and N_A is Avogadro's number. All appropriate conversion factors were used to calculate the antigen density in units of S monomers per VLP.
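To make the unit handling concrete, here is a minimal Python sketch of the antigen density calculation as reconstructed above. The variable names, the assumed units (mg/L for spike yield, VLPs/mL for the NTA particle concentration), and the example particle concentration are our assumptions, not values from the paper.

```python
# Hedged sketch of the spike antigen density calculation.
AVOGADRO = 6.022e23  # molecules per mol

def spike_antigen_density(spike_yield_mg_per_l, mw_spike_kda, vlp_conc_per_ml):
    """Number of S monomers per VLP, from yield, molecular weight, and particle count."""
    grams_per_l = spike_yield_mg_per_l / 1e3
    mol_per_l = grams_per_l / (mw_spike_kda * 1e3)   # kDa -> g/mol
    monomers_per_l = mol_per_l * AVOGADRO
    vlps_per_l = vlp_conc_per_ml * 1e3               # per mL -> per L
    return monomers_per_l / vlps_per_l

# Example with a hypothetical particle concentration: 0.38 mg/L of S-HA (174 kDa)
# and 1.4e10 VLPs/mL gives roughly the ~96 S monomers/VLP reported for SHAM1.
print(spike_antigen_density(0.38, 174.0, 1.4e10))  # ~94 S monomers per VLP
```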
VLPs were characterized by immunogold labeling analysis using transmission electron microscopy (TEM). Briefly, VLPs were adsorbed on Ni grids (Electron Microscopy Sciences, Hatfield, PA, USA) and incubated with 20 ng/µL anti-S antibody for 1 h, followed by labeling with protein G-gold nanoparticle (15 nm) conjugates (Electron Microscopy Sciences) at a concentration of 10^11 gold nanoparticles/mL for 30 min. Grids were stained with 2% phosphotungstic acid (PTA) and allowed to dry for 1 h prior to TEM analysis on a JEM-1400 Transmission Electron Microscope at 80 kV (JEOL, Peabody, MA, USA).

VLP Binding and Neutralization ELISA

VLP binding to ACE2 was analyzed using ELISA. 96-well MaxiSorp plates (BioLegend, San Diego, CA, USA) were coated with 2 µg/mL of recombinant human ACE2 (carrier free) (BioLegend) overnight at 4 °C. After washing 3× with ELISA wash buffer, wells were blocked with 1× ELISA diluent (5× stock, Thermo Fisher Scientific) for 2 h at room temperature. VLPs containing 10 µg/mL S protein/well in 1× ELISA diluent were captured for 2 h at room temperature. After washing 3×, recombinant ACE2-Fc protein (BioLegend) was titrated from 0.004 to 10 µg/mL per well for 2 h at room temperature. After washing 3× to remove unbound ACE2-Fc, detection was performed using protein A-HRP (1:1000, BioLegend). After 5× washing, visualization was performed using TMB substrate (BioLegend). Absorbance was read at 620 nm using a SpectraMax M5 plate reader. EC50 values were calculated with GraphPad Prism, fitting to a five-parameter logistic curve.

To confirm that an equivalent amount of VLPs was captured for each sample in the above experiment, three VLP-captured wells for each sample were labeled with anti-S2 (1:1000, Sino Biological US Inc.), washed 3×, and detected with anti-rabbit IgG-HRP (1:2000, PI31460, Thermo Fisher Scientific) (Figure S4). Washing 5×, visualization with TMB, and absorbance readings were performed as described above.

Neutralization of VLPs was investigated using a similar ELISA setup. First, VLPs containing 5 µg/mL S were incubated with 0.0032-2 µg/mL SARS-CoV-2 (2019-nCoV) Spike-Neutralizing Monoclonal Antibody (40591-MM48, Sino Biological US Inc.) overnight at 4 °C. On the same day, plates were coated with 2 µg/mL ACE2/well as described above and stored overnight at 4 °C. The next day, the preincubated VLPs were loaded into the ACE2-coated wells for 2 h. After washing 3× to remove unbound (i.e., neutralized) VLPs, captured VLPs were labeled with anti-S2, detected with anti-rabbit IgG-HRP (1:2000), and visualized with TMB as described above. Percent neutralization of VLPs was calculated as the difference in A620 signal for VLPs preincubated with and without neutralizing antibody divided by the A620 signal for VLPs preincubated without neutralizing antibody. Following nonlinear fitting, IC50 values were calculated with GraphPad Prism.

Statistical Analysis

Statistical analysis was performed using an unpaired Student's t test. All data are represented as the mean of three independent experiments, and error bars represent the standard error of the mean (SE). * p < 0.05, ** p < 0.01, *** p < 0.001; not significant, p > 0.05.
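As a worked illustration of the percent-neutralization formula and the nonlinear IC50 fitting described above: the paper used GraphPad Prism, so the SciPy four-parameter logistic fit below is our stand-in, and all data values are hypothetical.

```python
# Hedged sketch: percent neutralization from A620 signals, plus an IC50 fit.
import numpy as np
from scipy.optimize import curve_fit

def percent_neutralization(a620_with_mab, a620_without_mab):
    """(signal without mAb - signal with mAb) / signal without mAb, in percent."""
    return 100.0 * (a620_without_mab - a620_with_mab) / a620_without_mab

def logistic4(conc, bottom, top, ic50, hill):
    """Four-parameter logistic curve increasing with antibody concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** hill)

mab_conc = np.array([0.0032, 0.016, 0.08, 0.4, 2.0])  # µg/mL titration (hypothetical)
a620 = np.array([0.95, 0.80, 0.45, 0.20, 0.12])       # hypothetical raw signals
neut = percent_neutralization(a620, a620_without_mab=1.0)

params, _ = curve_fit(logistic4, mab_conc, neut, p0=[0.0, 100.0, 0.1, 1.0], maxfev=10000)
print(f"estimated IC50: {params[2]:.3f} µg/mL")
```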
Figure 1.Characterizing the effects of SARS-CoV-2 structural proteins on VLP formation and spike yield.(A) Schematic of the recombinant baculovirus vectors used to produce SE, SM, and SEM SARS-CoV-2 VLPs in Sf9 cells.(B) Transmission electron microscopy (TEM) images showing anti-S immunogold-labeled VLPs.(C) Western blot analyses of SARS-CoV-2 proteins in VLPs.Quantification of (D) VLP spike yield and (E) cellular S protein expression level based on Western blot analyses.(F) % S protein incorporated in VLPs.For (D-F), data represent mean ± SE (n = 3, unpaired Student's t test, all p > 0.05, not significant). Figure 1 . Figure 1.Characterizing the effects of SARS-CoV-2 structural proteins on VLP formation and spike yield.(A) Schematic of the recombinant baculovirus vectors used to produce SE, SM, and SEM SARS-CoV-2 VLPs in Sf9 cells.(B) Transmission electron microscopy (TEM) images showing anti-S immunogold-labeled VLPs.(C) Western blot analyses of SARS-CoV-2 proteins in VLPs.Quantification of (D) VLP spike yield and (E) cellular S protein expression level based on Western blot analyses.(F) % S protein incorporated in VLPs.For (D-F), data represent mean ± SE (n = 3, unpaired Student's t test, all p > 0.05, not significant). 17 Figure 2 . Figure 2. Pseudotyping improves SARS-CoV-2 VLP spike yield and antigen density.(A) Schematic of the recombinant baculovirus vector used to produce pseudotyped SHAM1 VLPs in Sf9 cells.The SARS-CoV-2 spike ectodomain was fused to the influenza H1N1 HA transmembrane (TM) and cytoplasmic tail (CT) domains and co-expressed with influenza M1 protein.(B) Transmission electron microscopy (TEM) images showing anti-S immunogold-labeled SHAM1 VLPs with a range (~100-200 nm) of diameters.(C) Western blot analysis of spike (top) and M1 (bottom) proteins in VLPs.Quantification of (D) VLP spike yield and (E) cellular S protein expression level based on Western blot analysis.(F) % S protein incorporated in VLPs.(G) VLP antigen density reported as the number of S monomers per VLP.For (D-G), data represent mean ± SE (n = 3, unpaired Student's t test, ** p < 0.01). Figure 2 . Figure 2. Pseudotyping improves SARS-CoV-2 VLP spike yield and antigen density.(A) Schematic of the recombinant baculovirus vector used to produce pseudotyped SHAM1 VLPs in Sf9 cells.The SARS-CoV-2 spike ectodomain was fused to the influenza H1N1 HA transmembrane (TM) and cytoplasmic tail (CT) domains and co-expressed with influenza M1 protein.(B) Transmission electron microscopy (TEM) images showing anti-S immunogold-labeled SHAM1 VLPs with a range (~100-200 nm) of diameters.(C) Western blot analysis of spike (top) and M1 (bottom) proteins in VLPs.Quantification of (D) VLP spike yield and (E) cellular S protein expression level based on Western blot analysis.(F) % S protein incorporated in VLPs.(G) VLP antigen density reported as the number of S monomers per VLP.For (D-G), data represent mean ± SE (n = 3, unpaired Student's t test, ** p < 0.01). Figure 4 . Figure 4. Pseudotyped VLPs are functional and show binding to ACE2 with varying affinity.(A) Schematic of conventional sandwich ELISA vs. a modified ELISA.Corresponding binding curve for indicated VLPs is shown in (B,C), respectively.(D) Quantification of the EC50 based on nonlinear regression of binding curve data in (C).For (C,D), data represent mean ± SE (n = 3, unpaired Student's t test, * p < 0.05, *** p < 0.001). Figure 4 . Figure 4. 
Pseudotyped VLPs are functional and show binding to ACE2 with varying affinity.(A) Schematic of conventional sandwich ELISA vs. a modified ELISA.Corresponding binding curve for indicated VLPs is shown in (B,C), respectively.(D) Quantification of the EC 50 based on nonlinear regression of binding curve data in (C).For (C,D), data represent mean ± SE (n = 3, unpaired Student's t test, * p < 0.05, *** p < 0.001). Figure 5 . Figure 5.Using pseudotpyed VLPs as SARS-CoV-2 virus surrogates to measure antibody neutralization activity.(A) Schematic showing neutralization ELISA setup.VLPs were preincubated with a neutralizing antibody to prevent capture by ACE2.(B) Signal reduction indicating neutralization of indicated VLPs.(C) The percent neutralization was determined by comparing the signal from VLPs preincubated with or without neutralizing antibody.(D) IC50 calculated from data in (C) using nonlinear regression analysis.For (B,C), data represent mean ± SE (n = 3). Figure 5 . Figure 5.Using pseudotpyed VLPs as SARS-CoV-2 virus surrogates to measure antibody neutralization activity.(A) Schematic showing neutralization ELISA setup.VLPs were preincubated with a neutralizing antibody to prevent capture by ACE2.(B) Signal reduction indicating neutralization of indicated VLPs.(C) The percent neutralization was determined by comparing the signal from VLPs preincubated with or without neutralizing antibody.(D) IC 50 calculated from data in (C) using nonlinear regression analysis.For (B,C), data represent mean ± SE (n = 3).
2023-09-29T15:18:54.002Z
2023-09-27T00:00:00.000
{ "year": 2023, "sha1": "9d6881d001e60a5661319baa8808315341557ad1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/24/19/14622/pdf?version=1695804454", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "537f9d229b5e07eb092c650665bf1f804f179276", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
266181074
pes2o/s2orc
v3-fos-license
Reproduction record of captive Sumatera elephant (Elephas maximus sumatranus) at Way Kambas National Park, Indonesia

This research aims to determine reproductive data on Sumatran elephants (Elephas maximus sumatranus) in the ETC and ERU of Way Kambas National Park, Indonesia during 1988-2021. Data recorded from the elephant population at both locations (ETC and ERU) include the number and gender of elephants, elephant calves, ages, and birth dates. Calving intervals and service periods were calculated from the calving records of cows with a minimum parity of two. Data collection produced data on 47 elephant calves from 13 female elephants with a parity of at least two and 12 primiparous cows. Elephant reproductive records at ETC and ERU were: age (37.44 ± 9.03 vs. 29.75 ± 3.30 years), parity (2.78 ± 1.09 vs. 2.50 ± 0.58), and age at first birth (18.11 ± 3.92 vs. 17.75 ± 0.50 years). The calving interval was 1857.56 ± 870.81 vs. 1833.00 ± 305.18 days, and the service period was 1229.44 ± 846.18 vs. 1210.50 ± 283.59 days, respectively. It can be concluded that the calving interval and service period for captive elephants at ETC and ERU were not much different and were within the normal range. Young cows showed better reproductive efficiency than older cows. Monitoring calves is very important. Further assistance is needed to improve mobile veterinary services for elephants, increase diagnostic laboratory capacity, and educate camp managers, veterinary assistants, and mahouts about elephant diseases and their monitoring and treatment. In addition, the use of reproductive technology, such as monitoring ovulation using ultrasound and carrying out artificial insemination, is expected to increase reproductive efficiency.

INTRODUCTION

The Sumatran elephant (Elephas maximus sumatranus) is a protected animal that is included in the International Union for Conservation of Nature's red list (Williams et al., 2020). Elephant reproduction is unique; on average, a female elephant will be pregnant for 20-23 months to calve an elephant weighing 90-120 kg (Brown, 2018). Elephant feces function as fertilizer and help spread plants in the forest, and good forest conditions in turn support animal life (Ong et al., 2023). Human-elephant conflict occurs as elephant habitat decreases due to replacement by agricultural land and settlements, and this is followed by a decline in the elephant population in the wild. It was estimated that the Sumatran elephant population had decreased from 2,400-2,800 in 2007 to 1,600-2,000 in 2017 (Directorate General of Ecosystem Natural Resources Conservation, 2020).

The Way Kambas Forest area was designated as a protected forest area and elephant training center in 1924 and inaugurated as Way Kambas National Park (WKNP) in 1985 (Wijayanti, 2018). This nature reserve is located in the lowlands, covering an area of 1,300 km², and is one of Indonesia's Elephant Conservation Centers (ECC). Geographically, WKNP is located between 4°37'-5°16' South Latitude and 105°33'-105°54' East Longitude in the southeastern part of Sumatra island, Lampung province (Rifanz, 2017). In the Way Kambas area, there is also an elephant hospital (EH) of Prof. Dr. Ir. H.
Rubini Atmawidjaya, which is the first EH in Indonesia and Southeast Asia. This EH was founded with grant funds from the world conservation organization Australian Zoo together with Taman Safari Indonesia. Since its founding in 2012, the EH has served animals, especially sick elephants and victims of disasters or conflicts. The EH serves captive elephants at the Elephant Training Center (ETC) and Elephant Response Unit (ERU) (Islami et al., 2016). The ETC is a sanctuary for elephants rescued from conflict, where they are trained; additionally, the elephants breed naturally at this site. Meanwhile, the ERU is an elephant training center that helps drive herds of wild elephants that enter the country site back to the forest (Oelrichs et al., 2016).

Captive Sumatran elephants in the WKNP are still mated naturally. Records of elephant reproduction in the ETC and ERU areas contain only calving dates. The system of foraging from morning to evening makes it possible for wild male elephants to mate with ETC female elephants. A cow at ETC named Bunga contributed five Sumatran elephant calves to the population during 1999-2019 (Salsabila et al., 2017). There are no records regarding the reproductive efficiency of elephants. Therefore, this study aims to determine the calving interval and service period of Sumatran elephants (Elephas maximus sumatranus) in the ETC and ERU of Way Kambas National Park.

MATERIALS AND METHODS

The study was conducted at the ETC and ERU of Way Kambas National Park (WKNP), located in Labuhan Ratu district, Lampung Province, Indonesia (Figure 1), at latitude 4°54'59.99" S and longitude 105°44'59.99" E; the map scale of WKNP is about 1:25,000 (Latitude, 2023). The landscape covers 125,621.30 ha of swamp and lowland rainforest. In the rainy season, the temperature is 23-30 °C with a humidity of 70-100%; in the dry season, the temperature is 29-35.2 °C with a humidity of 57-94% (MCGC, 2022). Sumatran elephants at ETC and ERU have their own books to record all care and treatment. Data were obtained from elephant recording books at the elephant hospital and interviews with doctors, paramedics, and mahouts. Data from the elephant populations are presented separately based on breeding location (ETC and ERU), age, and gender of elephants. Evaluation of the calving interval and service period was based on calving records for elephants with a minimum parity of two, with records of the cow's name, mahout's name, birth year, age, and calving date. The calving interval and service period of elephants during 1988-2021 in the WKNP are displayed descriptively. The calving interval is the time interval between the birth of one calf and the birth of the next calf from the same cow (Muslimiah et al., 2023). The service period is the period of time between the date of calving and the date of the subsequent successful conception (Temesgen et al., 2022). Additional data were also recorded regarding the feed and supplements given to the elephants. The collected data were then presented to doctors, paramedics, trainers, mahouts, and the Head of Section III, Kuala Penet, for validation.
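A minimal Python sketch of the two metrics defined above, computed from a cow's calving dates, is shown below. Since conception dates were not recorded, the service period is derived here as the calving interval minus an assumed gestation length; the 640-day default (within the 623-729 day range cited later in the Discussion) and the example dates are our assumptions.

```python
# Hedged sketch: calving interval and service period from calving records.
from datetime import date

def calving_intervals(calving_dates):
    """Days between consecutive calvings (requires parity >= 2)."""
    dates = sorted(calving_dates)
    return [(b - a).days for a, b in zip(dates, dates[1:])]

def service_periods(calving_dates, gestation_days=640):
    """Days from calving to the next successful conception:
    calving interval minus an assumed gestation length."""
    return [ci - gestation_days for ci in calving_intervals(calving_dates)]

cow = [date(2005, 3, 1), date(2010, 6, 15), date(2015, 9, 30)]  # hypothetical
print(calving_intervals(cow))  # [1932, 1933]
print(service_periods(cow))    # [1292, 1293]
```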
RESULTS

The population of Sumatran elephants in ETC and ERU in 2021 totaled 64 individuals consisting of bulls, cows, and male and female calves (Table 1). The oldest cow was Umri (born 1967), and the first elephant calving in WKNP occurred in 1988 (Table 2). From 1988 until 2021, 25 cows were recorded to have calved 47 calves, consisting of 13 cows with a parity of at least two (producing 35 calves) and 12 primiparous cows. Data from the 13 multiparous cows (Table 2) were used to calculate the calving interval and service period. Female elephants calved for the first time at the age of 11-24 years. The average age, parity, and age at first parturition (years) of elephants in ETC and ERU are presented in Table 3. The shortest calving interval for cows in WKNP was three years, and the longest was 12 years. The shortest service period for cows in the WKNP was one year, and the longest was 10 years. The average calving interval was 5.22 years (63 months), and the average service period was 3.22 years (39 months), in cows aged 16-40 years. For details, the data are presented in days (Table 4).

Some calving occurred in the 1988-2019 period; however, no calving occurred in 2020-2021. Thirteen cows gave birth to 35 calves, and 12 cows gave birth to calves for the first time. A total of 47 calves were born in WKNP. Some calves under the age of nine died due to Elephant Endotheliotropic Herpesvirus (EEHV) infection in 2014. Deaths of calves also occurred because they were killed by their mothers who had just given birth for the first time. Currently, the preferred approach is to start EEHV treatment before clinical symptoms appear; otherwise, the optimal window for treatment is usually considered to have passed. It is therefore important to monitor all calves regularly. Further assistance is needed to increase the provision of mobile veterinary services for elephants, increase the quantity and capability of diagnostic laboratory facilities, and educate elephant owners, camp managers, elephant curators, veterinary assistants, and mahouts about this disease and its monitoring and treatment methods. This will assist future initiatives aimed at tackling the increasing incidence of fatal acute hemorrhagic disease in Asian elephants and the potential long-term impacts on the reproduction and survival of this critically endangered species. The elephant calves were dominated by males, with 27 male calves compared to 14 female calves (Figure 2).

odorata Roxb), salam (Syzygium polyanthum), bungur (Lagerstroemia speciosa), palm (Livistonia rotundifolia) leaves, ketapang (Terminalia catappa), reeds (Imperata cylindrica), swamp grass (Laticilla cinerascens), mistletoe (Viscum album), putri malu (Mimosa pudica), pule (Alstonia scholaris) leaves, bamboo (Bambusa sp.), nulangan grass (Eleusine indica), and other wild plants. There was no feed difference between ETC and ERU elephants; it was just that in the ERU area there was more hay because it was close to settlements. Sumatran elephants (Elephas maximus sumatranus) also received corn (Zea mays) leaves, soybeans (Glycine max), green beans (Phaseolus vulgaris), rice (Oryza sativa), brown sugar, and bran as supplements.
DISCUSSION

Female elephants can reproduce from the age of eight to ten years (Nancy, 2019), while male elephants are ready to mate at the age of 12 to 15 years (Brown, 2014). The gestation period in the Asian elephant is 623-729 days (Hildebrandt et al., 2007). Elephants can reproduce until they are 50 years old (Brown, 2014). The optimal age for female elephants to reproduce is 15-30 years (Thitaram, 2012). The captive Way Kambas elephants first calved at the age of 11-20 years. Elephant mating was carried out with the help of mahouts. Unfortunately, the matings were not recorded in the ETC; meanwhile, in the ERU, only a few mahouts collected data on matings. As written in the mating records, an ERU mahout stated that the elephant Riska mated once in 2019 and four times in 2020 after calving in 2017 (Personal communication, 2023). Foraging in the forest allows captive elephants to mate with wild elephants without their mahout knowing, because the guard posts were far from the foraging area. According to ETC and ERU mahouts, captive elephants in Way Kambas mated more than twice in one estrus cycle. Bulls will mate with cows that are in estrus; pheromones in the urine of cows attract bulls to mate (Hildebrandt, 2006).

The calving interval for elephants ranged between four and six years, depending on whether the male elephant approached the female herd (Brown, 2014). Captive Way Kambas elephant calves are usually weaned at the age of one to two years. The shortest calving interval for captive elephants in the WKNP was 859 days, or around two to three years, for elephants aged 16-28 years, and the longest was 4317 days, or around 12 years, for an elephant aged 37 years (Table 4). This also confirmed Thitaram's (2012) report that the optimal age for female elephants to reproduce was 15-30 years. Two captive elephants at ETC, aged 30 and 37 years respectively, experienced extended calving intervals (7 and 12 years). The average calving interval for Sumatran elephants (Elephas maximus sumatranus) at ETC was 4.5 years (~54 months) (Hapsari, 2003).
Asian elephants (Elephas maximus) from several captive facilities in Thailand had a calving interval of 4.4 years (~53 months) and an average abortion/stillbirth rate of 12.4% (Toin et al., 2020). This report was similar to a study on the reproductive performance of a captive population of Asian elephants (Elephas maximus) in Sri Lanka (Pushpakumara et al., 2016). The Pinnawela elephant orphanage in Sri Lanka is the largest breeding facility for Asian elephants (Elephas maximus) in the world, with 35 cow elephants calving two, three, four, and five times, each with calving intervals of 5.0 years (~60 months), 4.8 years (~58 months), 7.9 years (~95 months), and 5.8 years (~70 months), respectively (Medawala et al., 2023). As the population of adult elephants in Way Kambas increased, the average calving interval for Way Kambas captive elephants ranged from 4.5 years (~54 months) to 5.22 years (~63 months). The observed calving intervals were affected by the increase in the number of female elephants, the age of the cows, undetected estrus, unscheduled matings, mating after the fertile period had passed, and poor semen quality. The optimum calving interval for elephants is in the range of four to six years (Toin et al., 2020). In this study, young elephants had shorter calving intervals and service periods than older elephants.

Female elephants only return to estrus after a lactation anestrus period of eight to 12 months (Brown et al., 2010). Way Kambas' captive elephants have an average service period of 3.22 years (approximately 39 months). The shortest service period for cows in the WKNP was one year, which occurred in cows aged 16-28 years, and the longest was 10 years in a 37-year-old cow. Puberty that occurs too early may also lead to premature reproductive aging.

Continuous ovarian cycling in unmated cows had adverse and cumulative effects on the reproductive health of captive elephants (Hildebrandt, 2006). Pheromones are intraspecific signaling substances used by female elephants to attract the attention of male elephants. Pheromones were found in urine at the end of the luteal phase and increased gradually during the follicular phase until they reached peak concentrations just before ovulation (Thitaram, 2009). The average elephant estrus cycle is four months: three months for the luteal phase and one month for the follicular phase. The luteal phase was defined from the first point at which the progesterone concentration increased above 0.3 ng/mL and remained at 1-2 ng/mL for at least two weeks (Thongtip et al., 2009). A double surge of LH occurred during the follicular phase. The first LH surge was not followed by ovulation; three weeks later, ovulation occurred 24 hours after the second LH surge (Kaewmanee et al., 2011). In Way Kambas, captive elephants that became pregnant had mated at night. Although female elephants could detect pheromones and were willing to mate with males, pregnancy would not occur if the female elephants did not ovulate. Therefore, monitoring hormones is very helpful to determine the right time for mating. Serum progestagen concentration reached 2 ng/mL within 2 months after AI (June 2005), continued to increase to 2-5 ng/mL, and persisted for more than 20 weeks, indicating pregnancy. Serum progestagen concentrations started to decline to baseline one month before calving (Brown et al., 2004; Thongtip et al., 2009).
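To illustrate the hormone-monitoring rule described above, here is a hedged Python sketch that flags luteal-phase samples in a weekly progesterone series using the 0.3 ng/mL threshold quoted from Thongtip et al. (2009); the weekly sampling schedule and the example values are hypothetical.

```python
# Hedged sketch: flagging luteal-phase weeks from a progesterone time series.
LUTEAL_THRESHOLD_NG_PER_ML = 0.3

def luteal_weeks(progesterone_ng_per_ml):
    """Indices of weekly samples whose progesterone exceeds the luteal threshold."""
    return [i for i, p in enumerate(progesterone_ng_per_ml)
            if p > LUTEAL_THRESHOLD_NG_PER_ML]

weekly_p4 = [0.1, 0.2, 0.9, 1.5, 1.8, 1.2, 0.4, 0.2]  # ng/mL, hypothetical
print(luteal_weeks(weekly_p4))  # weeks 2-6 exceed the 0.3 ng/mL threshold
```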
Ovarian activity depended on the hypothalamic-pituitary-ovary axis, with feedback control through the role of inhibin in FSH secretion by the pituitary. FSH concentrations were highest during week 2 and lowest during week 3; they then increased significantly from weeks 3 to 5 and remained stable until week 16 (Kaewmanee et al., 2011).

The elephant training center was established in 1985, followed by the ERU in 2014, contributing to a population of 47 Sumatran elephant calves. The EH has never found a captive WKNP elephant with reproductive problems. Female elephants did not receive special treatment in Way Kambas National Park. 2012-2017 was the period when the most Sumatran elephants were born (14 from 10 cow elephants). Elephant births were dominated by male calves, resulting in a larger population of male elephants in the ETC and ERU areas. The higher number of male elephants compared to female elephants certainly had an impact on the reproductive efficiency of elephants due to the minimal number of females in the area. Male elephants are useful when herding wild elephants back to the forest (Salsabila et al., 2017). Not all Sumatran elephants (Elephas maximus sumatranus) in the National Park were captive, and only the reproduction of captive elephants had been recorded; the EH only has data on individual elephants trained at ETC and ERU. After the elephants are cared for and trained at ETC, the adult elephants are transferred to ERU to handle human-elephant conflicts. Meanwhile, other trained elephants stay at ETC or are sent to conservation institutions that need Sumatran elephants.

Captive elephants in the WKNP are herded into the forest to find food in the morning and evening, and kept in a free-roaming enclosure in the afternoon. Elephants eat wild plants from the forest as a source of carbohydrates, fats, proteins, vitamins, and minerals (Resphaty et al., 2015; Dhairykar and Singh, 2020). An appropriate food supply and temperature would support increased reproductive efficiency (Ong et al., 2023). Captive elephants have comfortable shelter and enough food, so they are not too affected by the weather. Meanwhile, wild elephants usually entered the rutting season during the rainy season. The rutting season, which refers to the mating season, usually coincided with periods of abundant rainfall; this was because females entered their fertile phase in the latter half of the rainy season.

Male elephants exhibited increased aggression and sexual activity during the mating season due to elevated hormone production during 'musth'. Calves are born 22 months after mating, coinciding with the start of the rainy season, which provides abundant food. Therefore, mother elephants have abundant food availability and are able to produce milk to support the development of their offspring. Newborn Asian elephants usually weighed around 100 kg and grew rapidly in their early years (Dierenfeld et al., 2020). Maintaining the Asian elephant population in captivity could be achieved by increasing birth rates, improving the welfare of rural residents, and ensuring the health, nutrition, environment, and welfare of elephants. This will reduce the death rate and support the long-term survival of the population (Pla-Ard et al., 2023).
CONCLUSION

The average calving interval and service period for captive elephants in the ETC and ERU of Way Kambas National Park were similar and in the normal range. Young female elephants showed better reproductive efficiency than older elephants. The use of advanced reproductive technologies, such as monitoring ovulation using ultrasonography and carrying out artificial insemination, is expected to increase reproductive efficiency.

Figure 2. Distribution of elephant calving every six years between 1988-2021.

Table 1. Number of captive elephants at the Elephant Training Center and Elephant Response Unit in Way Kambas National Park.

Table 2. Data of WKNP Sumatran cow elephants with a minimum parity of two.
2023-12-13T16:02:44.557Z
2022-12-08T00:00:00.000
{ "year": 2022, "sha1": "ee99d2cae25ace5b620193a475984d4b4e9d87ce", "oa_license": "CCBYSA", "oa_url": "https://e-journal.unair.ac.id/OVZ/article/download/48610/27255", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "9ddc4860e689abcbf255bdd4bb9260deeb281bf0", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [] }
232421685
pes2o/s2orc
v3-fos-license
3DeeCellTracker, a deep learning-based pipeline for segmenting and tracking cells in 3D time lapse images

Despite recent improvements in microscope technologies, segmenting and tracking cells in three-dimensional time-lapse images (3D + T images) to extract their dynamic positions and activities remains a considerable bottleneck in the field. We developed a deep learning-based software pipeline, 3DeeCellTracker, by integrating multiple existing and new techniques, including deep learning for tracking. With only one volume of training data, one initial correction, and a few parameter changes, 3DeeCellTracker successfully segmented and tracked ~100 cells in the brains of both semi-immobilized and 'straightened' freely moving worms, in a naturally beating zebrafish heart, and ~1000 cells in a 3D cultured tumor spheroid. While these datasets were imaged with highly divergent optical systems, our method tracked 90-100% of the cells in most cases, which is comparable or superior to previous results. These results suggest that 3DeeCellTracker could pave the way for revealing dynamic cell activities in image datasets that have been difficult to analyze.

Introduction

Imaging cells to reveal the dynamic activities of life has become considerably more feasible because of the remarkable developments in microscope hardware in recent years (Frigault et al., 2009; Ahrens and Engert, 2015; Bouchard et al., 2015; Weisenburger and Vaziri, 2018). In addition, multiple software platforms for processing 2D/3D still images and 2D + T images have been developed (Eliceiri et al., 2012). However, processing cells in 3D + T images has remained difficult, especially when cells cannot be clearly separated and/or their movements are relatively large, such as cells in deforming organs. For processing objects in 3D + T images, the following two steps are required: (1) segmentation: segmenting the regions of interest in each 3D image into individual objects (Figure 1-figure supplement 1, left) and (2) tracking: linking an object in a particular volume to the same object in the temporally adjacent volume (Figure 1-figure supplement 1, right). For segmenting and tracking (hereafter collectively called 'tracking') cells of deforming organs in 3D + T images, programs optimized for processing images in particular conditions have been developed (Schrödel et al., 2013; Toyoshima et al., 2016; Venkatachalam et al., 2016; Nguyen et al., 2017). However, these methods cannot be used under conditions other than those for which they were designed, at least without a loss in processing efficiency. In other words, 3D + T images, especially those obtained under challenging conditions, can be efficiently processed only when customized software is developed specifically for those images. One reason for this is that many parameters must be optimized to achieve good results; for example, even in 2D image processing, changes in lighting could require the re-optimization of parameters for segmentation and tracking (Egnor and Branson, 2016). One way to solve this problem is to optimize the parameters automatically using machine learning, especially deep learning. Deep learning methods use an artificial neural network with multiple layers, that is, a deep network, to process complex data and automatically optimize a number of parameters from training data, which allows users to easily apply a single method to images obtained under different conditions.
In addition to this flexibility, deep learning methods have outperformed conventional methods in some image processing tasks such as image classification (LeCun et al., 2015; Krizhevsky et al., 2012). Nevertheless, deep learning methods are used mostly for segmentation and/or object detection, but not for tracking, because of the difficulty in preparing training data (Moen et al., 2019): correctly tracking a number of objects manually to obtain training data, especially from 3D + T images recorded over a long period of time, is extremely challenging. In addition, designing a multiple-step pipeline for segmenting cells and tracking their positions to work under various conditions has been difficult. In this study, we developed 3DeeCellTracker, a new pipeline utilizing deep learning techniques in segmentation and, for the first time to our knowledge, in tracking of cells in 3D + T images. We solved the problem of training data preparation for tracking by creating a synthetic dataset (see below). We also designed a multiple-step pipeline to achieve accurate tracking. 3DeeCellTracker was implemented on a desktop PC to efficiently and flexibly track cells over hundreds of volumes of 3D + T images recorded under highly divergent optical or imaging conditions. Using only one volume for training and one initial correction, 3DeeCellTracker efficiently tracked 100-150 neurons in the brains of semi-immobilized or 'straightened' (a special pre-processing step based on the worm's posture) freely moving Caenorhabditis elegans roundworms from four image datasets, obtained using spinning disk confocal systems in three different laboratories. With a few modifications, 3DeeCellTracker also tracked ~100 cells in the naturally beating heart of a zebrafish larva monitored using swept confocally aligned planar excitation (SCAPE) microscopy, a novel oblique light sheet microscope system for very rapid 3D + T imaging (Bouchard et al., 2015; Voleti et al., 2019).

eLife digest

Microscopes have been used to decrypt the tiny details of life since the 17th century. Now, the advent of 3D microscopy allows scientists to build up detailed pictures of living cells and tissues. In that effort, automation is becoming increasingly important so that scientists can analyze the resulting images and understand how bodies grow, heal and respond to changes such as drug therapies. In particular, algorithms can help to spot cells in the picture (called cell segmentation), and then to follow these cells over time across multiple images (known as cell tracking). However, performing these analyses on 3D images over a given period has been quite challenging. In addition, the algorithms that have already been created are often not user-friendly, and they can only be applied to a specific dataset gathered through a particular scientific method. As a response, Wen et al. developed a new program called 3DeeCellTracker, which runs on a desktop computer and uses a type of artificial intelligence known as deep learning to produce consistent results. Crucially, 3DeeCellTracker can be used to analyze various types of images taken using different types of cutting-edge microscope systems. And indeed, the algorithm was then harnessed to track the activity of nerve cells in moving microscopic worms, of beating heart cells in a young small fish, and of cancer cells grown in the lab. This versatile tool can now be used across biology, medical research and drug development to help monitor cell activities.
Furthermore, 3DeeCellTracker tracked ~900 cells in a cultured 3D tumor spheroid imaged with a two-photon microscope system. Our pipeline provided robust tracking results from the above-mentioned real datasets as well as from degenerated datasets, which differed in terms of signal-to-noise ratios, cell movements, and resolutions along the z-axis. Notably, 3DeeCellTracker's performance was better in terms of the tracking results than those from recently developed 2D/3D tracking software running on a desktop PC, and comparable to software running on a high-performance computing cluster (Toyoshima et al., 2016; Nguyen et al., 2017; Bannon et al., 2018). Furthermore, by using the positional information of the tracked cells, we extracted the dynamics of the cells: the worm's neurons exhibited complex activity patterns in the brain, the zebrafish heart cells exhibited activities synchronized with heart chamber movement, and the tumor spheroid cells exhibited spontaneous activities without stimulation. These results indicate that 3DeeCellTracker is a robust and flexible tool for tracking cell movements in 3D + T images, and can potentially enable the analysis of cellular dynamics that were previously difficult to investigate.

(Figure 1 legend, fragment: 3D + T images are taken as a series of 2D images at different z levels (step 1) and are preprocessed and segmented into discrete cell regions (step 2). The first volume (t = 1) of the segmentation is manually corrected (step 3). In the following volumes (t ≥ 2), by applying the ...)

Overview
We developed a new pipeline, 3DeeCellTracker, which integrates novel and existing techniques (Figure 1A). After preprocessing (see Materials and methods), it performs automatic segmentation of cells in all 3D + T images using a 3D U-Net, which classifies individual voxels into cell or non-cell categories (Ronneberger et al., 2015; Çiçek et al., 2016). Continuous regions of 'cell' voxels are separated into individual cell regions using the watershed method (Beucher and Meyer, 1993), and then numbered. The segmented cells are manually corrected only in the first volume of 3D images. In the following 3D tracking step, we considerably increased the efficiency by introducing a deep learning technique, a feedforward network (FFN), to predict cell positions based on spatial patterns of cells maintained between previous and current images. The predicted positions are corrected with a non-rigid point set registration method called PR-GLS (Ma et al., 2016) and by our custom method to obtain precise cell locations, which are critical for extracting the accurate dynamics of cellular signals. The 3D U-Net and the FFN are pre-trained using manually confirmed data or synthetic data (Figure 1B,C) from a single 3D image volume. The tracking results were visually inspected by comparing the locations of tracked cells with the corresponding raw images (Figure 1-figure supplement 2). The keys to our method are the use of simulation to produce large amounts of training data for the FFN and the carefully designed post-processing methods for the FFN, which result in the flexible and robust tracking of moving cells in very different 3D + T datasets. In the following two sections, we describe details of the segmentation and tracking methods.

Segmentation
For segmentation, cell-like regions in an image should be segmented into individual cells, which may differ in intensities, sizes, shapes, and textures.
Cell segmentation in 2D images using deep networks has been previously reported (Ronneberger et al., 2015; Van Valen et al., 2016; Bannon et al., 2018). In this study, we utilized a deep network called 3D U-Net to segment cells in 3D images by predicting the class labels (cell or non-cell) of individual voxels based on information contained in neighboring voxels (Figure 2A; Ronneberger et al., 2015; Çiçek et al., 2016). The U-Net can generate precise segmentation under diverse imaging conditions and can be trained with very few annotated images (e.g. only one 3D image volume in this study; see Figure 1B). The pre-processed images of the first volume are used to train the 3D U-Net (Figure 1B), and the trained U-Net is then used for the segmentation of cells in all the following volumes. Once trained on one dataset, the 3D U-Net can be directly reused for different datasets obtained under similar optical conditions. The cell-like regions detected using 3D U-Net are grouped and separated into individual cells using the watershed method (see Figure 2A and Materials and methods).
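To make the voxel classification step concrete, the following is a minimal, illustrative 3D U-Net in Keras (the framework used in this study). It is a sketch only: the input size, the number of filters, and the single pooling level are placeholder choices, not the actual structures used in this work (those are described in Materials and methods and Figure 2-figure supplement 1). Note how pooling can be applied in the x-y plane only, one way to accommodate a much lower z resolution.

```python
# Minimal, illustrative 3D U-Net sketch for voxel-wise cell/non-cell
# classification. The assumed input size (16 x 64 x 64) and filter
# counts are placeholders, not the paper's actual architectures.
from tensorflow.keras import layers, Model

def build_unet3d(input_shape=(16, 64, 64, 1)):
    inp = layers.Input(input_shape)
    # Contracting path
    c1 = layers.Conv3D(8, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv3D(8, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling3D(pool_size=(1, 2, 2))(c1)  # pool x-y only
    c2 = layers.Conv3D(16, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv3D(16, 3, padding="same", activation="relu")(c2)
    # Expanding path with a skip connection
    u1 = layers.UpSampling3D(size=(1, 2, 2))(c2)
    m1 = layers.concatenate([u1, c1])
    c3 = layers.Conv3D(8, 3, padding="same", activation="relu")(m1)
    # Sigmoid output: probability that each voxel belongs to a cell
    out = layers.Conv3D(1, 1, activation="sigmoid")(c3)
    model = Model(inp, out)
    # Binary cross-entropy, the loss used for training in this study
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```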
Tracking
For tracking cells, two major strategies can be considered. One strategy is to utilize the information contained in each cell region, for example, local peaks or distributions of intensities (Toyoshima et al., 2016). Using this information, a program can update the position of each cell by searching for its new position in nearby regions in the next volume. However, the obvious drawback of this strategy is that cells can be mistracked if their movements are comparable to or greater than the distances between adjacent cells (see below). Another strategy is to represent cells by their center points, ignoring the information in each cell region and treating the cells as a set of points. In this strategy, the new positions of a cell set in the next volume can be estimated based on the patterns of their spatial distributions, and cells with large movements can also be tracked based on the global trend of the movements. Previously reported methods utilizing this strategy, called point set registration (Ma et al., 2016; Jian and Vemuri, 2005; Myronenko et al., 2006), used the spatial distributions and coherency of movements to track points within artificial datasets. However, the spatial patterns are conventionally characterized by features designed by experts, an approach that did not work well for the cell images used in this study (refer to the results below concerning Fast Point Feature Histograms [FPFH; Rusu et al., 2009] versus FFN). Another problem with this strategy is that the estimated positions are not always accurate, because the information contained in each cell region is ignored (see the results below concerning our method for accurate correction). In order to obtain more robust and accurate tracking results, we integrated the spatial pattern (i.e. point sets) and local cell region strategies and used a deep learning technique, FFN, for the former.

(Figure 2 legend, fragments: The numbers 2-1 and 2-2 indicate cell-like voxels detected by 3D U-Net and individual cells segmented by watershed, respectively; one volume of a worm neuron dataset is used as an example. (B) Left: definition of the positions in two point sets corresponding to two volumes. Right: structure of the feedforward network for calculating the similarity score between two points in two volumes. The numbers on each layer indicate the ...)

We used the FFN to match temporally adjacent cells based on the distance pattern between each cell and its 20 surrounding cells (Figure 2B). Using the pattern obtained by the FFN, all the cells at t1 are compared with all the cells at t2, and the most similar ones are regarded as the same cells at t1 and t2, a process we call initial matching. Although deep network techniques are expected to produce superior cell tracking results, the method had not been used previously because it requires large amounts of training data. These data are difficult to prepare manually, especially for 3D + T images, as validating and correcting large numbers of cell positions over time by switching multiple layers along the z axis is virtually impossible. To solve this difficulty in preparing training data for the FFN, we generated >500,000,000 synthetic training data points by simulating cell movements (see Figure 2-figure supplement 2 and Materials and methods). In the pipeline, the center points of cells are extracted from the cell regions segmented by 3D U-Net and the watershed method, and a pre-trained FFN (Figures 1C and 2B) is applied to the cell points to generate the initial matching from volume t to t+1 (Figure 2C, panels 2-2 and 4-1). To improve the initial matching, a non-rigid point set registration method (PR-GLS) (Ma et al., 2016) is used to generate a coherent transformation, that is, neighboring cells should have similar movements (Figure 2C, panel 4-2). Originally, PR-GLS was used in conjunction with the FPFH method to predict cell positions (Ma et al., 2016); however, our FFN generates more accurate initial matchings than FPFH (Figure 2-figure supplement 3A and B), and our combination of FFN + PR-GLS generates more accurate predictions of cell positions than the FPFH + PR-GLS or the classic affine alignment (Myronenko et al., 2006) does (Figure 2-figure supplement 3C and D). Nevertheless, our method sometimes generates subtle errors, because some cells may show slightly different movements from those of their neighboring cells, and these errors can accumulate over time. To overcome these errors, the estimated positions are accurately corrected to compensate for these differences by utilizing information from local cell regions contained in the 3D U-Net output (Figure 2C).
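For illustration, the initial matching described above can be sketched as a greedy pairing over an FFN similarity matrix. The matrix is assumed to be precomputed by a trained network; the function below is our simplified stand-in, not the pipeline's actual implementation.

```python
import numpy as np

def greedy_match(score):
    """Initial matching from an FFN similarity matrix: repeatedly take
    the highest remaining score, match that pair of cells, and remove
    both cells from further consideration. score[i, j] is the
    similarity between cell i at t1 and cell j at t2."""
    score = score.astype(np.float64)
    matches = []
    for _ in range(min(score.shape)):
        i, j = np.unravel_index(np.argmax(score), score.shape)
        matches.append((i, j))
        score[i, :] = -np.inf   # cell i at t1 is now taken
        score[:, j] = -np.inf   # cell j at t2 is now taken
    return matches
```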
Tracking neurons in the deforming worm's whole brain
To test the performance of 3DeeCellTracker, we analyzed 3D + T images of neurons in the deforming brain of C. elegans. C. elegans has been used as a model for imaging all neuronal activities in the brain ('whole brain imaging') owing to its small brain (~40 mm³ in an adult) in a transparent body, the complete description of all connections of its 302 neurons, the feasibility of using genetically encoded calcium indicators (GECI), and the capability of perception, memory, and decision-making in its small brain (de Bono and Maricq, 2005). Whole brain imaging of C. elegans has been reported from several laboratories, and the most popular optical system currently is the spinning disk confocal system, for which each laboratory has developed its own tracking software (Schrödel et al., 2013; Toyoshima et al., 2016; Venkatachalam et al., 2016; Nguyen et al., 2017). We established our own spinning disk confocal system for whole-brain imaging, OSB-3D (see Materials and methods for details), and obtained 3D + T images of whole brain activity from a strain established in our laboratory and from the one used in a previous study (Nguyen et al., 2016) (datasets worm #1 and #2, respectively).

In addition, we analyzed whole-brain images published previously that were obtained using a different optical system (Toyoshima et al., 2016; dataset worm #3). In all datasets, the worms were semi-immobilized either by anesthetization (worm #1 and #2) or by constriction (worm #3), and red fluorescent proteins and GECI were used to monitor cell positions and cell activities, respectively. After segmentation, we manually confirmed 164 neurons in the head in the first volume of the 3D + T images, in which the distribution of cell signals was mostly, but not completely, separated from the background signals (dataset worm #1a; see Figure 3A-C). While the worm was anesthetized, its head deformed and the cells moved during imaging. Our method tracked all the neurons in all 171 volumes except for those that moved out of the field of view of the camera (Figure 3D-F, Figure 3-video 1). To evaluate the difficulty of tracking cells with large movements, we calculated a score for the relative movement (RM), which is the movement of a cell divided by the distance from that cell to its closest neighboring cell. When RM is small, searching for the closest cell is the simplest method to find the new position of a given cell in the next volume. However, such a simple approach may lead to tracking errors when RM ≥ 0.5 (Figure 4 and Materials and methods). Although most of the cell RM values were < 0.5 in the worm #1 dataset (Figure 3B, bottom), other datasets had cells with RM ≥ 0.5; nevertheless, these were also successfully tracked by our program (see below). We also tested the worm #1b dataset, obtained from the same strain as worm #1a.

The same method was applied to 3D + T images obtained using the same OSB-3D system in our laboratory but from the neurons of a worm strain used in a previous whole brain imaging study (Nguyen et al., 2016; AML14 strain, dataset worm #2; Figure 3-figure supplement 2A). Our method again achieved a 100% accuracy of tracking and extracted calcium signals from all 101 neurons, even though this dataset differs considerably from the worm #1 dataset in terms of nuclear marker intensity and movement patterns. It should be noted that more neurons were detected using our method than using the original method (approximately 90 neurons or less) (Nguyen et al., 2016). We also applied our method to a publicly available dataset, which was obtained using a different strain, a different microscopy setup, and different imaging conditions from the worm #1 and #2 datasets (worm #3, Figure 3-figure supplement 3A; Toyoshima et al., 2016). In this case, our result was comparable to that of the original report, in which 171 out of 198 neurons were tracked without error (Toyoshima et al., 2016). These results indicate that our method can flexibly process 3D + T images of neurons in a semi-immobilized worm's brain obtained under different conditions.

Tracking neurons in freely moving worm's brains
To reveal the relationships between neuronal activities and animal behavior, Nguyen et al. developed a method to track neurons in a freely moving worm, in which the deformation of the brain and the movements of neurons are considerably larger than those occurring in semi-immobilized worms (Nguyen et al., 2016; Nguyen et al., 2017). After 'straightening' the worm and segmenting its neurons, they made a set of reference volumes to which each neuron was matched to assign its identity. Although this method is powerful, it requires a high-performance scientific computing cluster running on up to 200 cores simultaneously.
We therefore tested whether 3DeeCellTracker implemented on a desktop PC could process such a challenging dataset (worm #4).

(Figure 4 legend: Here RM = (movement a cell traveled per volume) / (distance to the closest neighboring cell). When RM ≥ 0.5, cell A at t = 2 will be incorrectly assigned to cell B at t = 1 if we simply search for its closest cell instead of considering the spatial relationship between cells A and B. When RM = 0.5, cell assignment is also impossible. Please also see Materials and methods for details. (B) Examples of large cell movements with RM ≈ 1. Also see Figure 10 and Table 3.)

With a few modifications (see Materials and methods), we used 3DeeCellTracker to segment and track 113 cells in the initial 500 volumes from the worm #4 dataset. Here we analyzed the images preprocessed by the straightening method (Nguyen et al., 2017), which is necessary for our current method. Even after the straightening, the cell movements were quite large, with many comparable to the distances between cells, that is, RM ≥ 0.5 (Figure 6A and B). We visually inspected the tracking of 90 cells, while the remaining 23 were not checked because of difficulties arising from weak intensities/photobleaching and/or the cells being in dense regions. Note that 70 cells were inspected in the original report (Nguyen et al., 2017). We found that 66 out of 90 cells (73%) were correctly tracked without errors (Figure 6D-F, single mode), which is acceptable but not ideal. To further improve our method for the larger cell movements, we developed a new mode, in which multiple predictions of cell positions are made from different previous time points, and the final prediction is taken as the average of these predictions. We call this the 'ensemble mode' and the previous method the 'single mode', wherein the predictions of cell positions at time t are derived from the results at t-1 (Figure 6C). When applying the ensemble mode, we again found 66 cells correctly tracked without errors, and the remaining 24 cells were correctly tracked in most volumes (94%-98.2%). This was a substantial improvement over the single mode, in which errors at t are maintained until the end of tracking (Figure 6D-F, ensemble mode; Figure 6-video 1). In total, the ensemble mode correctly tracked 44,905 out of 45,000 cell movements (99.8%), a result at least comparable to that in the original report (Nguyen et al., 2017). From the trajectories of two example cells, we found that large movements, including ones along the z-axis (across ~30 layers), occurred frequently (Figure 6G), demonstrating the excellent performance of our method on a desktop PC on this challenging dataset containing considerably large-scale movements. However, because the ensemble mode requires longer times for tracking than the single mode (10 versus 2 min/volume, respectively), we used the single mode in the following analyses.

(Table 1 legend: Comparison between the complexities of our method and a previous method. For each method, the procedures are listed along with the number of parameters (in bold) to be manually determined by the researcher (see the guide in our GitHub repository). Our method requires manual determination of less than half of the parameters required by the previous method. Noise levels are more variable and dependent on image quality and the pre-trained 3D U-Net.
Other parameters are less variable and depend on the sizes of cells or the coherence levels of cell movements, and thus do not need to be intensively explored for optimization when imaging conditions are fixed. See the user guide in GitHub for how to set these parameters.)

Tracking cells in beating zebrafish heart images obtained using the SCAPE system
To test the general applicability of 3DeeCellTracker, we applied our method to the 3D + T images of a naturally beating heart in a zebrafish larva obtained at 100 volumes per second using a substantially different optical system, the SCAPE 2.0 system (Voleti et al., 2019; Figure 7A). The speed of image acquisition of this system is extraordinary relative to that of the spinning disk confocal system, which generally allows for the recording of 10 volumes/s. This dataset includes both large cells with stronger intensities that are easy to segment and small cells with weaker intensities that are difficult to segment (Figure 7B, top and middle). The photobleaching in the dataset made it challenging to detect and track the weak-intensity cells in the later part of the imaging, because of the substantial overlap between the small cell signals and background signals (Figure 7B; Figure 7-figure supplement 1). This weak intensity is unavoidable because of the extremely quick scanning rate of the system (10,568 frames per second). In addition, the rapid beating of the heart caused relatively large movements of all the cells in the x-y-z directions (Figure 7B, bottom; Figure 7-video 1; see below), making cell tracking more challenging than for worm #1-3, which predominantly moved in the x-y plane.

After segmentation, we manually confirmed 98 cells in the first volume and tracked them automatically (Figure 7C and D; Figure 7-video 2). We found that among the 30 larger cells (size: 157-836 voxels) with higher intensities, 26 (87%) were correctly tracked in all 1000 volumes. Even though the smaller cells with lower intensities were more difficult to track, when including them, we still correctly tracked 66 out of 98 cells (67%) (Figure 7E and F). The tracked movements of two example cells showed regular oscillations in 3D space (x, y, and z axes; Figure 7G), consistent with the regular beating movement of the heart. It should be noted that we did not optimize the pipeline procedures for the zebrafish data, except for a few parameter changes (Table 2). Our results indicate that 3DeeCellTracker is also capable of analyzing images with rapid and dynamic movements in 3D space, obtained from a substantially different optical system.

We then used the tracking results to answer a biological question: what is the relationship between the intracellular calcium signals and the beating cycles of the heart? We extracted the calcium dynamics of the 66 correctly tracked heart cells, which co-express GECI as in the worm datasets, and analyzed the phases of activities in these cells relative to the beating cycles of the ventricular and atrial regions, respectively. As shown in Figure 7H and I, the calcium signals were largely synchronized with the beating cycles. Although a portion of the heart cells (32/98 total) was not correctly tracked and therefore not analyzed, this result is still remarkable because the tracked cells show the relationships between calcium dynamics and the natural heartbeats in vivo.
Observation of this relationship was made possible only by the development of a state-of-the-art microscope system that can monitor 100 volumes per second and of our software pipeline that can correctly track large portions of the corresponding cell movements in 3D space.

Tracking ~900 cells in a 3D tumor spheroid imaged with a two-photon microscope
We also tested our method on a dataset more related to biomedical applications, namely a dataset of ~900 cells in a 3D multicellular tumor spheroid (MCTS) imaged with a two-photon microscope. 3D MCTS are increasingly being used for drug screening because of their similarity to tumor cells in vivo (Riedl et al., 2017). Therefore, the measurement of individual cell activities in 3D MCTS has become necessary, although tracking the movements of large numbers of cells in 3D + T images of 3D MCTS has presented a considerable challenge. We obtained 3D + T images of 3D MCTS cells expressing the FRET-type ERK sensor EKAREV-NSL (Komatsu et al., 2011) using a two-photon microscope (see Materials and methods). This dataset shows normal distributions of intensities and movements but includes a much larger number of cells than either the worm brain or the zebrafish heart dataset (Figure 8A and B; Figure 8-video 1). Furthermore, cell division and death occurred during the imaging. Our method segmented and tracked 901 cells, of which we visually inspected the tracking results of 894 cells (the remaining seven cells were found to have segmentation errors in volume #1). We excluded the cells that experienced cell death or cell division after such events occurred, and found that 869 out of 894 cells (97%) were correctly tracked (Figure 8C-E, Figure 8-video 2). Using the tracking results, we extracted the ERK activity from the FRET signal. In three representative cells with cross-layer movements, we confirmed that the extracted signals correctly reflected intensity changes in the cells (Figure 8F). We also found that the spheroid as a whole moved downwards, although each cell moved in a different direction (Figure 8G).

Evaluation of the method under challenging conditions using degenerated datasets
In the assessments described in the preceding sections, we successfully analyzed multiple types of datasets that differ in terms of image resolution, signal-to-noise ratio, types of cell movement, etc. We then systematically evaluated the performance of our method (single mode) under a variety of conditions using a series of degenerated datasets obtained by modifying the worm #3 dataset. For a fair comparison, we used the same pre-trained 3D U-Net and the same manually corrected segmentation at t = 1 used on the original worm #3 dataset. A general difficulty in segmentation arises from images with low signal-to-noise ratios. Excessive noise can obscure the real cell signal, leading to incorrect segmentation and ultimately incorrect tracking. We tracked cells in three degenerated datasets with different levels of Poisson noise added to the original images: sd = 60, sd = 100, and sd = 140 (Figure 9A).
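As a minimal sketch of how such a noise-degenerated dataset could be produced: the construction below draws zero-mean noise from a Poisson distribution whose variance equals the squared target sd. The exact noise-generation procedure is not spelled out in the text, so this shifted-Poisson form is an assumption.

```python
import numpy as np

def add_poisson_noise(img, sd, seed=0):
    """Degrade an image with zero-mean Poisson-derived noise of a
    given standard deviation. Assumed construction: draw Poisson
    variates with variance sd**2 (i.e. lambda = sd**2) and subtract
    the mean, so the noise has mean 0 and standard deviation sd."""
    rng = np.random.default_rng(seed)
    lam = float(sd) ** 2
    noise = rng.poisson(lam, size=img.shape).astype(np.float64) - lam
    return img.astype(np.float64) + noise
```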
The noise level in the original images in the non-cell regions was sd = 4.05 and the median intensity was 411, whereas the median intensity of cell regions was 567 with a 95% confidence interval of 430-934, indicating only a tiny overlap between non-cell and cell regions (Figure 3-figure supplement 3B). In the sd = 60 condition, the intensities of non-cell and cell regions overlapped to a greater extent (Figure 9B), and image quality appeared poorer than that of the original image (Figure 9A). Nevertheless, our method achieved a low error rate (6/175 = 3%; Figure 9C-E). Even in the sd = 140 condition, in which the intensities overlapped extensively (Figure 9B) and the image quality was quite poor (Figure 9A), our method achieved an acceptable error rate (16/175 = 9%, that is, 91% correct; Figure 9C-E). Note that the tracking error rate was much lower than the segmentation error rate (16/175 vs 57/175, respectively).

(Figure 7 legend, continued: The sizes of the ventricle and atrium cannot be directly measured, so we instead estimated them as size = sd(x) × sd(y) × sd(z), where sd is the standard deviation (more robust than the range of x, y, z), and x, y, z are the coordinates of correctly tracked cells in the ventricle or in the atrium. To improve visibility, these sizes were normalized as size(normalized) = size / sd(size). Calcium signals (GCaMP) were also normalized as Ca²⁺(normalized) = Ca²⁺ / mean(Ca²⁺). (I) Phase differences between intracellular calcium dynamics and the reciprocal of segment sizes in the ventricle and atrium were estimated. Here we used the reciprocal of segment sizes because we observed anti-synchronization relationships in (H). The phase differences were estimated using cross-correlation as the lag with the largest correlation. Most cells showed similar phase differences (mean = −0.110π; standard deviation = 0.106π). All scale bars, 40 µm. The online version of this article includes video and figure supplements for Figure 7.)

Because the overlaps between cell regions and background in the degenerated datasets were much more severe than the overlaps in the real datasets (Figure 9B and panel B in Figures 3, 6, 7 and 8), these results suggest that our method can robustly track cells in 3D + T images with severe noise.

Another difficulty comes from large displacements of cells between volumes during tracking. We enhanced this effect by removing intermediate volumes from the worm's 3D + T images, and then tested three datasets with 1/2, 1/3, and 1/5 of the volumes of the original dataset (Figure 9F). As expected, when more volumes were removed, the movements of cells and the number of tracking errors increased (Figure 9G-L). Nevertheless, the error rate in the 1/3 vol condition was acceptable (14/175 = 8%, i.e. 92% correct), while the error rate in the 1/5 vol condition was relatively high (25/175 = 14%, i.e. 86% correct).

We then tested whether our method can track cell movements along the z-axis, in which the resolution is much lower than that in the x-y plane. In such conditions, 3D + T tracking is more challenging along the z-axis than in the x-y 2D plane. In the deforming worm and the tumor spheroid datasets, the resolution along the z-axis was approximately 1/10-1/5 of that in the x-y plane (panel A in Figure 3, Figure 3-figure supplements 1, 2 and 3, and Figure 8). We confirmed that 5, 16, 25, and 668 cells in the worm #1a, #1b, and #3, and tumor spheroid datasets, respectively, showed cross-layer movements along the z-axis. Still, our method correctly tracked all of those cells. For example, while two cells in the worm #3 dataset exhibited multiple cross-layer movements (Figure 9-figure supplement 2), they were correctly tracked until the end of the sequence.
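The size and phase estimates quoted in the Figure 7 legend above can be written down compactly. The sketch below follows the legend's definitions (size as the product of coordinate standard deviations, and phase lag as the cross-correlation peak), but the exact preprocessing of the traces is our assumption; converting the lag to a phase in units of π would additionally require the beating period, which is not reproduced here.

```python
import numpy as np

def segment_size(xyz):
    """Proxy for chamber size: sd(x) * sd(y) * sd(z) of the tracked
    cell coordinates (xyz: array of shape (n_cells, 3)), following
    the definition in the Figure 7 legend."""
    return float(np.prod(np.std(xyz, axis=0)))

def phase_lag(sig_a, sig_b):
    """Lag (in volumes) with the largest cross-correlation between
    two traces, after zero-mean, unit-variance normalization (an
    assumed preprocessing step)."""
    a = (sig_a - sig_a.mean()) / sig_a.std()
    b = (sig_b - sig_b.mean()) / sig_b.std()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)
```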
Furthermore, we also degenerated the zebrafish heart data to test whether our method can track dynamic 3D cell movements with unequal resolutions. Even when the zebrafish image resolution along the z-axis was reduced by sum binning with a shrink factor of 2, our method was able to correctly track a majority of the cells (80% of the 30 larger cells and 53% of all 98 cells; Table 3). Together, these results indicate that our method can correctly track cells under various conditions, including severe noise, large movements, and unequal resolution.

Challenging movements and their relationship with the tracking error rate
To evaluate the tracking performance of our method on datasets with large movements, we summarized the RM values, which are listed in Table 3 and Figure 10 (see also Figure 4). Although many cell movements in the worm #3 and the zebrafish datasets had RM ≥ 0.5, most of the worm neurons and the larger cells in the zebrafish heart were correctly tracked by the single mode. In addition, our program with the ensemble mode achieved 99.8% tracking of the neurons of a 'straightened' freely moving worm, while many cell movements in this dataset had RM ≥ 1. These results indicate that our method is capable of analyzing images with challenging displacements.

To investigate the relationships between the cell movements and the tracking error rate, we drew a scatter plot of the averaged RM and the corresponding error rate for each volume (Figure 10-figure supplement 1). The results, indicated by the regression lines, suggest positive correlations between the RM and the error rate, although the correlation trends appear to differ by dataset, implying that movement is not the only factor affecting the error rate. We also drew a scatter plot and regression line for the worm #4 dataset tracked with the ensemble mode (Figure 10-figure supplement 2). The result suggests that the error rate is not correlated with movement, perhaps because in the ensemble mode the cell positions are predicted from multiple different volumes, and therefore the movement from a previous volume is not closely connected with the error rate.

Comparison of the tracking accuracies of our and previous methods
To further assess the capabilities of our method, we compared the tracking performance of our method with that of two other state-of-the-art methods for cell tracking. The first was DeepCell 2.0, a newer version of DeepCell, which is a pioneer in segmenting and tracking cells in 2D + T images using deep learning (Bannon et al., 2018). Unlike our method, DeepCell 2.0 has been primarily tested on images that include cell divisions, births, and deaths, but not on images of deforming organs. The second was the software developed by Toyoshima et al., which does not use a deep learning technique but has achieved higher accuracy in both segmenting and tracking of worm whole brain datasets than any other previous method (Toyoshima et al., 2016). On our worm brain datasets, DeepCell 2.0 tracked ~10% of cells properly (Figure 11A and Table 4); this is probably because the tracking algorithm of DeepCell 2.0 is optimized for the movements associated with cell divisions, but not for quickly deforming organs. Toyoshima's method tracked ~90% of the original worm #3 neurons but only ~10% of those in the degenerated 'worm #3, 1/5 sampling' dataset (Figure 11B, Figure 11-figure supplement 1B, and Table 4). When tested on the zebrafish dataset, Toyoshima's method was able to detect only 76 cells and missed some weak-intensity cells, performing worse than our method (which detected 98 cells).
Of the detected 76 cells, 21 were incorrectly tracked from the first volume, probably because their method automatically re-fits the cell shape using a Gaussian distribution after segmentation, which can lead to a failure when fitting weak-intensity cells. Their tracking accuracy was also lower than ours (Table 4). In addition, our method was found to be comparable to or more efficient in terms of runtime than DeepCell 2.0 and Toyoshima's methods (Table 5). These results suggest that our method is more capable of tracking cells in 3D + T images than previous methods.

Discussion
Tracking biological objects in 3D + T images has proven to be difficult, and individual laboratories still frequently need to develop their own software to extract important features from images obtained using different optical systems and/or imaging conditions. Moreover, even when identical optical systems are used, the optimization of many parameters is often required for different datasets. To solve these problems, we have developed a deep learning-based pipeline, 3DeeCellTracker, and demonstrated that it can be flexibly applied to divergent datasets obtained under varying conditions and/or of different qualities. We analyzed multiple image series of worms, zebrafish, and tumor spheroids, which differed in terms of nuclear marker, intensity level, noise level, number of cells per image, image resolution and size, imaging rate, and cell speed. Notably, we showed that our method successfully tracked cells in these datasets under challenging conditions, such as large movements (see Figure 10, Table 3 and Materials and methods) and cross-layer movements (Figures 6G and 7G, and Table 4), while the other methods are likely more suited to other conditions. Furthermore, our method is comparable to or more efficient in runtime than previous methods (Table 5 and Materials and methods). Running in ensemble mode on a desktop PC, our method tracked the neurons of a 'straightened' freely moving worm with high accuracy, a task that required a computing cluster with up to 200 cores in the previous study (Nguyen et al., 2017). We consider that the high accuracy and robustness of our method are based on its use of the FFN, a deep learning technique, together with its post-processing methods for tracking (Figure 2-figure supplement 3).

As mentioned above, although deep learning techniques have been predicted to exhibit superior performance in 3D cell tracking, they had not been used to date because of the difficulty of manually preparing large quantities of training data, especially for 3D + T images. To solve this problem, we generated a synthetic training dataset via artificial modification of a single volume of worm 3D cell positional data, which produced excellent performance by our method. These results demonstrate that the deep network technique can be used for cell tracking by using a synthetic point set dataset, even though the procedures for generating the dataset are simple.

Not only is our method flexible and efficient, it can also be easily used by researchers. For example, our method worked well in all the diverse conditions tested with only minor modifications. Notably, under constant imaging conditions, our method can be directly reused without modifying any parameters (Tables 1 and 2), making it convenient for end-users.

(Table 3 legend: Evaluation of large RM values in each dataset. We evaluated how challenging the tracking task is in each dataset using the metric 'relative movement' (RM). When RM ≥ 0.5, the cell cannot be simply tracked by identifying it as the closest cell in the next volume (see Figure 4A). When RM ≥ 1.0, the task becomes even more challenging. Note that a large RM is just one factor making the segmentation/tracking task challenging. Lower image quality, photobleaching, three-dimensional movements with unequal resolutions, lower coherency of cell movements, etc. also make tracking tasks more challenging. Some datasets share the same movement statistics because they are degenerated from the same dataset by adding noise or by modifying resolution. Also see Figures 4 and 10.)

This differs from conventional image processing methods, in which slight differences in the obtained data, such as in light intensity, resolution, the size of the target object, etc., generally require the re-setting of multiple parameters through trial-and-error. Even when imaging conditions are substantially changed, our method requires only a few modifications, primarily in the segmentation process: (1) modifying the structure of the 3D U-Net (this step can be skipped because a 3D U-Net of the same structure can be adapted to a new dataset by re-training; see Materials and methods); (2) re-training the 3D U-Net; and (3) modifying parameters according to the imaging conditions (see Tables 1 and 2 and the 'Guide for parameters.md' file in https://github.com/WenChentao/3DeeCellTracker). For re-training the 3D U-Net, manual annotation usually takes 2-3 hr for 150-200 cells, and the network can be automatically trained in 1-2 hr on our desktop PC with a single GPU. The FFN for tracking generally does not require re-training. The number of parameters to be manually determined is much smaller in our method than in conventional methods (Table 1) due to its use of deep learning. The parameters can be quickly modified (within 1 hr) following the guide we have provided in the GitHub repository.

Nevertheless, our method can be improved in two ways: more reliable tracking and a simplified procedure. Tracking reliability can be affected by large movements, weak intensities, and/or photobleaching. As revealed in our results (Figure 6), large movements such as those in a freely moving worm can be resolved using the ensemble mode, which borrows the idea of ensemble learning in machine learning, that is, using the average of multiple predictions to reduce the prediction error (Polikar, 2009). A similar idea, matching cells using multiple reference volumes and a clustering method instead of averaging, was applied in the previous study (Nguyen et al., 2017), suggesting that ensemble learning is a good approach to resolving challenging movements. On the other hand, the problem of weak intensity and photobleaching remains to be solved. One possible approach would be to normalize the intensities to obtain similar images over time, although doing so might not be easy.

(Figure 10 legend: RM values at different time points in three datasets. Also see Figure 4 and Table 3.)

To further simplify the entire procedure, we contemplate developing new network structures that combine additional steps. The U-Net and 3D U-Net are networks for semantic segmentation, which classify each voxel as a specific category, that is, as either a cell or a non-cell region.
Beyond this, networks have been designed to achieve instance segmentation by further separating objects in the same category into individual objects, eliminating the need to use a watershed for separating connected cells. Although recent advances have been made in these architectures, the focus is still on segmenting common objects in 2D images (Liang et al., 2016; Romera-Paredes and Torr, 2016; He et al., 2017). We suggest that instance segmentation is a possible approach for simplifying and improving cell segmentation in future studies. Another possible area for improvement is the use of the FFN for tracking. By further improving the FFN structure and using more training data, the network should be able to generate more accurate matches that can be used directly for tracking cells without point set registration.

We developed 3DeeCellTracker mainly using semi-immobilized worm datasets. However, it also successfully processed 3D + T images of a zebrafish dataset obtained using the SCAPE 2.0 system (Voleti et al., 2019). This system is quite different from the spinning disk confocal system used for the worm datasets in resolution, z-depth, and the applied optical sectioning principle (Bouchard et al., 2015). While SCAPE is an original and outstanding method for enabling ultra-high-speed 3D + T image acquisition, it had been difficult to obtain or develop software that can efficiently process the 3D + T images produced by the system. In this study, we tracked 3D + T images obtained from the SCAPE system by simply modifying a few parameters, which allowed us to obtain an acceptable result (87% of large cells correctly tracked). Considering that the lower performance relative to other datasets might have arisen from the difficulty in segmenting the smaller, low-intensity cells (Figure 7B, the upper and the middle panels), the result may be improved by further optimization of the segmentation. We also successfully tracked a large number of cells (~900) in a 3D MCTS monitored using a two-photon microscope, a result that further supports the wide applicability of our method.

Our method cannot track cells that are dividing or fusing, or many cells that enter the field of view during the recording. This is because it operates under the assumption that each cell has a unique corresponding cell in another volume in order to match cells with large movements. To handle cells with division, fusion, or entry, it will be necessary to integrate our algorithms with additional algorithms.

In summary, we have demonstrated that 3DeeCellTracker can perform cell segmentation and tracking on 3D + T images acquired under different conditions. Compared with the tracking of slowly deforming cells in 2D + T images, it is a more challenging task to track cell nuclei in a semi-constrained/freely moving worm brain, beating zebrafish heart, or 3D tumor spheroid, all of which undergo considerable movements in 3D space. We consider this to be the first report on a pipeline that efficiently and flexibly tracks moving cells in 3D + T images from multiple, substantially different datasets. Our method should enable the segmentation and tracking of cells in 3D + T images acquired by various optical systems, a task that has not yet been performed.

(Figure 11 legend: (A) Arrows indicate all the correctly tracked cells in our method and in DeepCell 2.0. Cells without arrows were mistracked. The asterisk indicates a cell whose centroid moved to the neighboring layer (z = 10) and thus was not included in the evaluation. Also see Figure 11-video 1 and Table 4.
(B) Comparison with Toyoshima's software tested using all layers in worm #3 with a 1/5 sampling rate. For demonstration, we only show tracking results at z = 9. Again, arrows indicate all the correctly tracked cells in our method and in Toyoshima's software. Because Toyoshima's software is not able to label the tracked cells using different colors, all cells here are shown by purple circles. Some cells were not marked by circles because they were too far from the centroids of the cells (in other layers). See also Table 4. All scale bars, 20 µm. The online version of this article includes the following video and figure supplement(s) for Figure 11: Figure supplement 1. Comparison of the tracking accuracies between our method and two previous methods. Figure 11-video 1. Tracking results of a 2D + T image (z = 9 of worm #3) using our method and DeepCell 2.0. https://elifesciences.org/articles/59187#fig11video1)

Materials and methods

Computational environment
Our image processing task was performed on a personal computer with an Intel Core i7-6800K CPU @ 3.40 GHz × 12 processor, 16 GB of RAM, and an Ubuntu 16.04 LTS 64-bit operating system. We trained and implemented the neural networks with an NVIDIA GeForce GTX 1080 GPU (8 GB). The neural networks were constructed and implemented using the Keras high-level neural network API (https://keras.io) running on top of the TensorFlow machine-learning framework (Google, USA). All programs were implemented within a Python environment, except for the image alignment, which was implemented in ImageJ (NIH; RRID:SCR_003070), and the manual labeling, manual correction, and manual confirmation, which were implemented in ITK-SNAP (RRID:SCR_002010; http://www.itksnap.org) or IMARIS (Bitplane, UK; RRID:SCR_007370). Instead of ITK-SNAP and IMARIS, one can use napari (https://napari.org) in the Python environment.

Pre-processing
Step 1: Because the 2D images were taken successively along the z axis, rather than simultaneously, small or large displacements could exist between different layers of a 3D volume. Ideally, this should be compensated for before the segmentation procedure. Using the StackReg plugin (Thevenaz et al., 1998) in ImageJ (NIH), we compensated for the displacements by using rigid-body transformations to align each layer with the center layer in the worm #1 and #2 datasets. However, we did not apply this alignment in the worm #3, #4, zebrafish, and tumor spheroid datasets but still obtained acceptable results, indicating that this step may be skipped.

(Table 4 legend: We evaluated the tracking accuracy in two 2D + T image datasets for the comparison with DeepCell 2.0, because DeepCell 2.0 currently cannot process 3D + T images. We also evaluated the tracking accuracy in three 3D + T image datasets for the comparison with Toyoshima et al., 2016. For the zebrafish dataset, we only processed the initial 100 volumes because Toyoshima's software requires a very long processing time.)

Step 2: Cells in the same image could have very different intensities, and detecting weak cells is generally difficult. To solve this problem, we applied local contrast normalization (Goodfellow et al., 2017) through a sliding window (27 × 27 × 3 voxels) so that all cells had similar intensities. This normalization was applied to the nucleus marker images only for tracking and did not affect the calculation of the signal intensities for either the red fluorescent protein or the GECI.
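A minimal sketch of the sliding-window local contrast normalization in Step 2, assuming the common subtract-local-mean, divide-by-local-standard-deviation form (the exact formula used in the pipeline is not given here, so that form and the epsilon stabilizer are assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_normalize(img, size=(3, 27, 27), eps=1e-6):
    """Sliding-window local contrast normalization: subtract the
    local mean and divide by the local standard deviation so that
    cells of very different brightness end up with similar
    intensities. The window is 27 x 27 x 3 voxels (given here in
    z, y, x order)."""
    img = img.astype(np.float64)
    mean = uniform_filter(img, size=size)
    sq_mean = uniform_filter(img ** 2, size=size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return (img - mean) / (std + eps)
```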
3D U-Net
We used 3D U-Net structures similar to the one shown in the original study (Çiçek et al., 2016). The network received a 3D image as input and generated a 3D image of the same size with values between 0 and 1 for each voxel, indicating the probability that the voxel belonged to a cell region (Figure 2-figure supplement 1). We used different structures of the 3D U-Net for different imaging conditions, in order to capture as much information as possible about each cell within the limitations of the GPU memory. Such modification of the U-Net structures is preferred but not necessary. The same structure can be reused on different datasets because of the flexibility of deep learning methods. The 3D U-Net structure shown in Figure 2-figure supplement 1A was used on our datasets worm #1 and worm #2, which have identical resolution, but we also successfully used the same structure on the binned zebrafish dataset (see below), which had a very different resolution. For dataset worm #3, which had a lower resolution, we reduced the number of max-pooling and upsampling operations so that each voxel in the lowest layers corresponds to sizes similar to those in the datasets worm #1 and worm #2. We also reduced the sizes of the input, output, and intermediate layers because of the lower resolution. As the smaller structure occupied less GPU memory, this allowed us to increase the number of convolutional filters on each layer so that the capacity of the network was increased (see Figure 2-figure supplement 1B). For the zebrafish dataset, although it has an even lower resolution in the x and y axes, because the sizes of zebrafish cardiac cells are larger than those of worm neurons, we used the same number of max-pooling and upsampling operations as in datasets worm #1 and worm #2. We adjusted the sizes of the layers in the x, y, and z dimensions to a unified value (= 64) because the resolutions in the three dimensions are not very different in the zebrafish dataset (Figure 2-figure supplement 1C). For simplicity, we reused the structures A and B in Figure 2-figure supplement 1 for the worm #4, tumor spheroid, and binned zebrafish datasets (see Table 2).

The U-Net can be trained using very few annotated images (Ronneberger et al., 2015). In this study, we trained six 3D U-Nets: (1) for datasets worm #1 and #2, (2) for dataset worm #3, (3) for the freely moving dataset worm #4, (4) for the zebrafish dataset, (5) for the tumor spheroid dataset, and (6) for the binned zebrafish dataset. Each 3D U-Net used one 3D image for training. Note that, although datasets worm #1 and #2 are substantially different with respect to signal intensity and cell movements, the same trained 3D U-Net was used. The image was manually annotated into cell regions and non-cell regions using the ITK-SNAP software (http://www.itksnap.org). We used binary cross-entropy as the loss function to train the 3D U-Net. Because the raw image sizes were too large (512 × 1024 × 28, 256 × 512 × 20, 180 × 260 × 165, etc.) for computing on the GPU, we divided the raw images into small sub-images that fit the input sizes of the three 3D U-Net structures (160 × 160 × 16, 96 × 96 × 8, or 64 × 64 × 64), and combined the cell/non-cell classifications of the sub-images to form a final classification of the whole image.
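The divide-and-stitch step can be illustrated as follows. Here, `predict_fn` stands in for a trained 3D U-Net's prediction on one sub-image, and the reflective padding for volumes that are not exact multiples of the tile size is our assumption (the text does not describe a padding strategy); non-overlapping tiles are assumed for simplicity.

```python
import numpy as np

def predict_by_tiles(volume, predict_fn, tile=(16, 160, 160)):
    """Split a large volume (z, y, x) into U-Net-sized sub-images,
    classify each with predict_fn, and stitch the cell/non-cell
    probabilities back into a whole-volume map."""
    # Pad each axis up to the next multiple of the tile size.
    pads = [(-s) % t for s, t in zip(volume.shape, tile)]
    padded = np.pad(volume, [(0, p) for p in pads], mode="reflect")
    out = np.zeros(padded.shape, dtype=np.float32)
    for z in range(0, padded.shape[0], tile[0]):
        for y in range(0, padded.shape[1], tile[1]):
            for x in range(0, padded.shape[2], tile[2]):
                sub = padded[z:z + tile[0], y:y + tile[1], x:x + tile[2]]
                out[z:z + tile[0], y:y + tile[1], x:x + tile[2]] = predict_fn(sub)
    # Crop away the padding to recover the original shape.
    return out[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
```

For example, with a Keras model such as the one sketched earlier, `predict_fn` could be `lambda s: unet.predict(s[None, ..., None])[0, ..., 0]`.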
To improve the 3D U-Net performance, we increased the training data by data augmentation: we applied random affine transformations to the annotated 3D images using the 'ImageDataGenerator' class in Keras. The affine transformation was restricted to the x-y plane and not applied in the z-direction, because the resolution in the z-direction is much lower than that in the x-y plane for the worm datasets and the tumor spheroid dataset (see panel A in Figure 3, Figure 3-figure supplements 1, 2 and 3, and Figure 8). Although the zebrafish dataset has similar resolutions in the x, y, and z directions, we applied the same affine transformation for simplicity. We trained the U-Net for datasets worm #1 and #2 using a 3D image from another dataset independent of #1 and #2 but obtained under the same optical conditions, and its classifications on datasets worm #1 and #2 were still good (panel C in Figure 3, Figure 3-figure supplements 1 and 2), indicating a superior generalization ability of the 3D U-Net. Because only one dataset is available for the specific resolutions of datasets worm #3, #4, the zebrafish heart, and the tumor spheroid, we trained the 3D U-Net by using the first volume of the 3D + T images of each dataset and then applied the 3D U-Net to all the following 3D images of the datasets.

Watershed
The 3D U-Net generated probability outputs between 0 and 1, which indicated the probability that a voxel belonged to a cell-like region. By setting the threshold to 0.5, we divided the 3D image into cell-like regions (> 0.5) and non-cell regions (≤ 0.5). The cell-like regions in the binary images were further transformed into distance maps, where each value indicated the distance from the current voxel to the nearest non-cell region voxel. We applied a Gaussian blur to the distance map to smooth it, and searched for local peaks, which were assumed to be cell centers. We then applied watershed segmentation (Beucher and Meyer, 1993), using these centers as seeds. Watershed segmentation was applied twice; the first application was 2D watershed segmentation for each x-y plane, and the second application was 3D watershed segmentation for the entire 3D space. Two segmentations were required because the resolutions in the x-y plane and the z-dimension differed.
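The watershed procedure just described can be sketched with standard scientific Python tools. This version shows only a single 3D pass (the pipeline runs a 2D pass per x-y plane first), and the smoothing and peak-detection parameters are illustrative choices, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_cells(prob, sigma=1.0, threshold=0.5):
    """Separate a 3D U-Net probability map into labeled cells:
    threshold at 0.5, compute the distance to the nearest non-cell
    voxel, smooth it, take local peaks as cell centers, and run a
    seeded watershed on the inverted distance map."""
    mask = prob > threshold
    dist = ndimage.distance_transform_edt(mask)
    dist = ndimage.gaussian_filter(dist, sigma=sigma)
    labeled, _ = ndimage.label(mask)
    peaks = peak_local_max(dist, labels=labeled, min_distance=2)
    seeds = np.zeros(prob.shape, dtype=np.int32)
    for i, p in enumerate(peaks, start=1):
        seeds[tuple(p)] = i  # one seed voxel per detected cell center
    return watershed(-dist, seeds, mask=mask)
```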
Feedforward network: architecture
An initial matching, that is, a set of correspondences between cells in two temporally adjacent volumes, is the first step for cell tracking in our pipeline and critical for the final tracking accuracy. The correspondences can be estimated based on the relative cell positions, assuming that these positions do not substantially change even during organ deformation. By comparing the similarity between the relative positions of two cells in different volumes, we can determine whether they are the same cell. One conventional method to represent relative positions is fast point feature histograms (FPFH) (Rusu et al., 2009). The PR-GLS study (Ma et al., 2016) successfully used the FPFH method to match artificial point set datasets. However, we found that FPFH yielded a poor initial match for the datasets considered in this study (Figure 2-figure supplement 3A), perhaps because of the sparse distribution of the cells. We thus designed a three-layer feedforward network (FFN) to improve the initial match (Figure 2B). The three-layer structure generated match results comparable with those of more complex structures with four or six layers. By comparing the representations of two points, the network generated a similarity score between two cells. The initial matching based on the similarity score from the FFN was more accurate than that achieved by the FPFH method (Figure 2-figure supplement 3A and B).

As the input for the FFN, each point was represented by the positions of its 20 nearest neighboring points relative to that point, normalized by their mean distance d. The normalized positions were then sorted by their absolute values in ascending order. Finally, the mean distance d was included as the last value, so each point was represented by a 61D vector. We utilized the first fully connected layer to calculate the learned representation of the relative positions of each point as a 512D vector (Figure 2B, the first hidden layer after the input). We then applied a second fully connected layer to these two 512D vectors to compare the representations of the two points. The resulting 512D vectors (the second hidden layer after the input) were processed by a third fully connected layer to obtain a single similarity score between 0 and 1, which indicated the probability of the two points originating from the same cell. We matched the two points with the highest score in two different volumes, ignored these two points, and then matched the next pair of points with the highest score. By repeating this process, we obtained an initial match (Figure 2C, panel 4-1).

Feedforward network: training
In this study, we trained only one FFN, based on one original image of dataset worm #3, and used the network on all the datasets, including the worms, zebrafish, and tumor spheroid. To train the network, we first performed segmentation on a single volume of dataset worm #3 and obtained a point set for the centers of all cells. Because we required a large number of matched point sets for training, and because manually matching point sets is time-consuming and impractical, we created a synthetic training dataset by applying random affine transformations to the point set described above and adding small random movements to each point, according to the following equation: x′ = A x + ε₁ (+ ε₂ for a subset of points), where x′ is the transformed 3D position of a point x. A is a matrix applying the random affine transformation; more specifically, A = I + U, where I is a 3 × 3 identity matrix and U is a 3 × 3 random matrix with each element U_ij drawn from a uniform distribution. We used U_ij ~ uniform(−0.05, 0.05) in this study. ε₁ is the 3D vector adding random movements to each point in a point set, while ε₂ adds even larger random movements to a subset of points (20 out of 175 cells) to simulate serious errors from the segmentation procedures. We used ε₁,ᵢ ~ uniform(−2, 2) and ε₂,ᵢ ~ uniform(−5, 5) in this study. By randomly generating U_ij, ε₁,ᵢ, and ε₂,ᵢ, we could generate an arbitrarily large number of new point sets with new positions from the original point set. After that, we chose a specific point A and another point B from each generated point set and the original point set, respectively, and we calculated their relative positions as inputs for the FFN. In half of the cases, points A and B correspond, that is, they come from the same cell, while in the other half, point A is from a cell adjacent to the cell of point B, and thus they do not correspond. In this study, we used 576,000 newly generated pairs of points A and B for training the FFN (Figure 1C). We used binary cross-entropy as the loss function to train the FFN. During the training, the performance of matching by the FFN gradually improved, as measured using an independent test dataset of two point sets (Figure 2-figure supplement 2B).
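A minimal sketch of the synthetic point-set generation defined by the equation above; the function returns one transformed copy per call, and repeated calls with different seeds would build up the arbitrarily large training set described in the text.

```python
import numpy as np

def synthesize_matched_points(points, n_outliers=20, seed=0):
    """Generate a transformed copy of a point set (shape (n, 3)) for
    FFN training: x' = A x + eps1 (+ eps2 for a random subset), with
    A = I + U, U_ij ~ uniform(-0.05, 0.05), eps1 ~ uniform(-2, 2) per
    coordinate, and eps2 ~ uniform(-5, 5) added to a subset of points
    to mimic severe segmentation errors."""
    rng = np.random.default_rng(seed)
    A = np.eye(3) + rng.uniform(-0.05, 0.05, size=(3, 3))
    eps1 = rng.uniform(-2, 2, size=points.shape)
    moved = points @ A.T + eps1
    idx = rng.choice(len(points), size=n_outliers, replace=False)
    moved[idx] += rng.uniform(-5, 5, size=(n_outliers, 3))
    return moved  # row i of `moved` corresponds to row i of `points`
```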
PR-GLS method
The initial match calculated using the FFN was corrected using the expectation-maximization (EM) algorithm in the PR-GLS method, as described in the original paper (Ma et al., 2016). In the original study, the initial match (by FPFH) was recalculated during the EM iterations; however, in most datasets, we calculated the initial match only once (by the FFN) before the EM steps were performed, which did not cause problems. Only for the dataset worm #4, with its very large movements, did we recalculate the initial matching by the FFN after every 10 iterations, in order to improve accuracy. After the PR-GLS corrections, we obtained coherent transformations from the points of each volume to the subsequent volume (Figure 2C, panel 4-2).

Single and ensemble modes
The FFN + PR-GLS can predict new cell positions at time t from the cell positions at t-1; this is the default mode of our pipeline, which is referred to as the single mode (Figure 6C). At a sufficiently high sampling rate, the single mode is reasonable because the movement from t-1 to t is much smaller than that from t-i (i > 1) to t, making the prediction from t-1 more reliable. When cell movements are very large (such as in a freely moving worm) and the sampling rate is not sufficient, the prediction from t-1 to t becomes less reliable, and the average of multiple predictions from t-i to t may be more accurate. Therefore, we developed an approach using the average of multiple predictions, referred to as the ensemble mode (Figure 6C). We tested this mode only on the worm #4 dataset, which had quite large movements (Figures 6B and 10, and Table 3), because the runtime of the ensemble mode is much longer than that of the single mode; that is, the runtime of the FFN + PR-GLS component is proportional to the number of predictions used. Specifically, the runtime including segmentation and tracking for worm #4 was approximately 30 volumes/h in the single mode and 6 volumes/h in the ensemble mode. In the ensemble mode, we calculated an average of up to 20 predictions from previous time points. In cases for which t ≤ 20, the average was calculated from time points [t-1, t-2, ..., 1]; in cases for which t > 20, it was calculated over [t-d, t-2d, ..., t-20d], where d is the quotient (t-1)//20.
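As a concrete illustration of the reference time points used by the ensemble mode described above (a sketch of the selection rule, not the pipeline's actual code):

```python
def reference_time_points(t, n_ref=20):
    """Time points used for ensemble-mode predictions of volume t.

    For t <= n_ref, all previous volumes are used; otherwise, n_ref
    volumes are chosen evenly spaced back toward the first volume:
    [t-d, t-2d, ..., t-n_ref*d], where d = (t-1) // n_ref.
    """
    if t <= n_ref:
        return list(range(t - 1, 0, -1))
    d = (t - 1) // n_ref
    return [t - k * d for k in range(1, n_ref + 1)]
```

For example, reference_time_points(101) gives d = 5 and returns [96, 91, 86, ..., 1], so the 20 predictions span the whole recording up to that point.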
Accurate correction for tracking

By applying the PR-GLS method to the initial match, we obtained a more reliable transformation function in which all obvious incorrect matches were corrected. However, small differences still existed in a few cells, which could accumulate over time to become large differences without correction. Thus, we included an additional automatic correction step, in which the center position of each cell was moved slightly toward the centers of each 3D U-Net detected region (for details, see Figure 2-figure supplement 4). After correction, all cells were moved to the estimated positions with their shapes unchanged from volume #1. For most of the datasets, we applied only one correction; for the worm #4 and tumor spheroid datasets, we applied corrections up to 20 times, until achieving convergence. If multiple cells overlapped in the new positions, we applied the watershed method again to assign their boundaries. In carrying out this step, we calculated the images of the tracked cell regions based on an interpolation of the raw image from volume #1 when the resolution along the z axis was much lower than that in the x-y plane (i.e. all datasets except for worm #4 and the zebrafish). We did this in order to obtain more accurate estimates for each cell region.

Manual correction of segmentation

We manually corrected the segmentation only in volume #1. We superimposed the automatically segmented regions of the volume #1 3D image onto the raw 3D image in the ITK-SNAP software, and we discarded false positive regions, such as autofluorescence regions and neuronal processes. If any cells were not detected (false negative errors), we reduced the noise level parameter in the preprocessing (Table 1), which can eliminate such errors, or we manually added these cells in ITK-SNAP. Oversegmented and undersegmented regions were corrected carefully by considering the sizes and shapes of the majority of cells. The overall error rates depend on the image quality, but we usually found that around 10% of cells required manual correction, which usually took 2-3 hr (for 100-200 cells).

Visual inspection of tracking results

We counted the tracking errors of all cells by visually inspecting the tracking results in each volume (Figure 1-figure supplement 2). To confirm the tracking of the worm's neurons, we combined two 3D + T images (the raw images and the tracked labels) in a top-bottom arrangement displayed as a hyperstack in the ImageJ software to compare the cell locations in each volume. As the cells in the worm datasets primarily moved in the x-y plane (Figures 3F and 9L, and Figure 3-figure supplement 2F), we observed the correspondence between the raw images and the tracked labels in each x-y plane to identify tracking errors in the results. To confirm our tracking of the tumor spheroid dataset, we applied the same method, although there were many small cross-layer movements in the images (Figure 8F and G). It was more difficult to confirm the tracking results in the hyperstack images for the cells in the freely moving worm and the zebrafish heart than in the semi-immobilized worm and tumor spheroid, due to the frequent occurrence of large movements of cells across layers (Figures 6G and 7G). Therefore, we superimposed images of the tracked labels onto the raw images and imported them into IMARIS (Bitplane, UK), and then visually checked the tracking results of each cell individually in 3D mode. For the zebrafish dataset with repeated oscillations, we tracked and checked all 98 cells from 1000 volumes. Because the freely moving worm engaged in irregular movements in three directions, visual checking was more challenging, and thus we only tracked and checked the initial 500 volumes of 3D images out of the 1519 original volumes (Figure 6F).

Evaluating large movements

Large movements of cells are one issue that makes tracking challenging. To evaluate how challenging each cell movement is, we defined the 'relative movement' (RM) of cell A at time t as: RM = (movement of cell A from t−1 to t) / (distance between cell A and its closest neighboring cell at t). Figure 4A, middle and right panels, illustrates two cells moving in one-dimensional space with RM ≥ 0.5. In this condition, a very simple tracking method, 'search for the closest cell at the next time point', will mistakenly match cell A at t = 2 to cell B at t = 1. Therefore, we argue that movements with RM ≥ 0.5 are more challenging than movements with RM < 0.5. It should be noted that large movement is not the only challenge for tracking. For example, the zebrafish datasets have much higher error rates if we evaluate the tracking in all cells (Table 3), which is likely caused by the weak intensities in these small cells and by photobleaching (Figure 7B and F; Figure 7-figure supplement 1).
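The RM metric is simple to compute from tracked positions at consecutive time points. A minimal sketch, assuming corresponding rows of the two (n, 3) arrays refer to the same cells:

```python
import numpy as np
from scipy.spatial.distance import cdist

def relative_movements(prev_points, curr_points):
    """Compute the relative movement (RM) of each tracked cell:
    RM = (movement of a cell from t-1 to t) /
         (distance to its closest neighboring cell at t)."""
    movement = np.linalg.norm(curr_points - prev_points, axis=1)
    dists = cdist(curr_points, curr_points)  # pairwise distances at t
    np.fill_diagonal(dists, np.inf)          # ignore self-distance
    nearest = dists.min(axis=1)              # closest neighbor at t
    return movement / nearest

# Cells with RM >= 0.5 are the challenging cases discussed above.
```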
Extracting activities

After having tracked the worm's neurons from the first volume to the last volume, we extracted activities from the regions corresponding to each neuron. By measuring the mean intensities of each neuron in the two channels corresponding to the GECI and the positional markers, the activity was computed as GCaMP5G/tdTomato in dataset worm #1. We similarly extracted the calcium activities (GCaMP) of heart cells in the zebrafish dataset and the FRET activity in the tumor spheroid dataset.

Comparison of tracking accuracy between our method and two other methods

Because DeepCell 2.0 currently only supports tracking cells in 2D datasets, we tested it using two layers of images (z = 9 and z = 16) from worm dataset #3, which include relatively large numbers of cells. We excluded cells that disappeared or appeared due to cross-layer movements for a fair comparison. We supplied DeepCell 2.0 with the precise segmentations from our method in order to focus on comparing the performance of the tracking algorithms. We tested Toyoshima's software using two worm datasets and one zebrafish dataset. For the zebrafish dataset, we only tested the initial 100 volumes because their method required a longer time for the image processing than ours did (see Table 5). For these two methods, we communicated with the corresponding authors and did our best to optimize the parameters based on their suggestions. Nevertheless, it is possible that those analyses could be further optimized for our datasets.

The runtimes of our pipeline and previous methods for the tested datasets

We tested the runtimes of our method and the previous methods on different datasets (see Table 5). Because DeepCell 2.0 currently can only track 2D + T images, its runtime was estimated by taking the average runtime for one layer and multiplying it by 21 layers, in order to compare it with our method. As a result, DeepCell 2.0 required a runtime comparable with that of our method. On the other hand, Toyoshima's software took a much longer time than our method to process the worm and zebrafish datasets. In our method, the initial matching using our custom feedforward network is performed in a pairwise fashion, so the time complexity is O(n²), where n is the number of detected cells. In our tested datasets with 100-200 cells, this did not take a long time; for example, ~3.3 s were required for matching 98 cells between two volumes (zebrafish), or ~8.6 s for 164 cells (worm #1a), using our desktop PC. In cases where n is very large, the runtime may become much longer, for example, ~4 min for 901 cells (tumor spheroid). If both n and the volume number are very large, it may be necessary to optimize the method to reduce the runtime, for example, by restricting the calculation of matchings to a set (100-200 or more) of representative cells (e.g. large/bright cells), while the movements of the other, non-representative cells can be estimated from the movements of these representative cells by utilizing the coherency of movements in a deforming organ.
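The pairwise nature of the initial matching can be illustrated as follows: an (n1, n2) score matrix is filled by evaluating the FFN on every pair of points (hence the O(n²) cost), and pairs are then accepted greedily from the highest score downward, as in the initial-matching step described earlier. The sketch below assumes the score matrix is already computed; the function name is illustrative.

```python
import numpy as np

def greedy_match(scores):
    """Greedily pair points by descending similarity score.
    scores: (n1, n2) array of FFN similarity outputs in [0, 1]."""
    scores = scores.astype(float)
    matches = []
    for _ in range(min(scores.shape)):
        i, j = np.unravel_index(np.argmax(scores), scores.shape)
        if not np.isfinite(scores[i, j]):
            break
        matches.append((i, j))
        scores[i, :] = -np.inf   # point i in volume 1 is now used
        scores[:, j] = -np.inf   # point j in volume 2 is now used
    return matches
```

Restricting the rows and columns of the score matrix to m representative cells, as suggested above, would reduce the matching cost from O(n²) to O(m²).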
Worm strains and cultivation

The techniques used to culture and handle C. elegans were essentially the same as those described previously (Brenner, 1974). Both TQ1101 lite-1(xu7) and AML14 were obtained from the Caenorhabditis Genetics Center (University of Minnesota, USA). Young adult hermaphrodites were used in the imaging experiments.

Worm datasets

In this study, we used four worm datasets. The 3D images in datasets worm #1 and #2 were obtained using our custom-made microscope system, OSB3D (see below). The worm strains for datasets worm #1 and #2 are KDK54165 (RRID:WB-STRAIN:KDK54165) and AML14 wtfEx4[rab-3p::NLS::GCaMP6s: rab-3p::NLS::tagRFP] (RRID:WB-STRAIN:AML14) (Nguyen et al., 2016), respectively. The 3D + T images of dataset worm #3 were published previously with the worm strain JN2101 Is[H20p::NLS4::mCherry]; Ex (Toyoshima et al., 2016). The 3D + T images of the freely moving worm dataset (worm #4) were published previously with the worm strain AML14 (Nguyen et al., 2016; Nguyen et al., 2017). In three (#79, #135, and #406) out of the 500 volumes of the worm #4 dataset, most of the cells disappeared from the volume or a large amount of noise occurred, which rendered these volumes impossible to analyze. These volumes were therefore manually skipped for tracking; that is, the cells were assumed not to have moved from the previous volumes.

Spinning disk confocal system for 3D + T imaging of the worm's brain

We upgraded our robotic microscope system (Tanimoto et al., 2017) to a 3D version. We used a custom-made microscope system that integrated the Nikon Eclipse Ti-U inverted microscope system with an LV Focusing Module and an FN1 Epi-fl attachment (Flovel, Japan). The excitation light was a 488 nm laser from an OBIS 488-60 LS (Coherent) that was introduced into a confocal unit with a filter wheel controller (CSU-X1 and CSU-X1CU, respectively; Yokogawa, Japan) to increase the rotation speed to 5,000 rpm. The CSU-X1 was equipped with a dichroic mirror (Di01-T-405/488/561, Semrock) to reflect the 488 nm light to an objective lens (CFI S Fluor 40X Oil, Nikon, Japan), which transmitted the GCaMP fluorescence used for calcium imaging and the red fluorescence used for cell positional markers. The laser power was set to 60 mW (100%). The fluorescence was introduced through the CSU-X1 into an image splitting optic (W-VIEW GEMINI, Hamamatsu, Japan) with a dichroic mirror (FF560-FDi01, Opto-line, Japan) and two bandpass filters (BA495-540 and BA570-625HQ, Olympus, Japan). The two fluorescent images were captured side-by-side on an sCMOS camera (ORCA Flash 4.0v3, Hamamatsu, Japan), which was controlled by a Precision T5810 (Dell) computer with 128 GB RAM using the HCImage Live software (Hamamatsu) for Windows 10 Pro. A series of images for one experiment (approximately 1-4 min) required approximately 4-15 GB of space; the images were stored in memory during the experiment and then transferred to a 1-TB USB 3.0 external solid-state drive (TS1TESD400K, Transcend, Taiwan) for further processing. For 3D imaging, the z-position of the objective lens was regulated by a piezo objective positioner (P-721) with a piezo controller (E665) and the PIMikroMove software (PI, Germany). The timings of the piezo movement and the image capture were regulated by synchronized external edge triggers from an Arduino Uno (Arduino, Italy) using 35 ms intervals for each step, in which the image capture took 29.9 ms. For each step, the piezo moved 1.5 µm, and one cycle consisted of 29 steps. We discarded the top-most step because it frequently deviated from the correct position, and we used the remaining 28 steps. Note that one 3D image was 42 µm in length along the z-axis, which was determined based on the typical diameters of neuronal cell bodies (2-3 µm) and of a young adult worm's body (30-40 µm). Each cycle required 1,015 ms; thus, one 3D image was obtained per second.
This condition was reasonable for monitoring neuronal activities because the worm's neurons do not generate action potentials (Goodman et al., 1998) and because many neuronal responses change on the order of seconds (Nichols et al., 2017). We also tested a condition using 10 ms for each step and 4.9 ms for an exposure, with the same step size and step number per cycle (i.e. 2.3 volumes of 3D images per second), which yielded a comparable result. For cyclic regulation of the piezo position, we used a sawtooth wave instead of a triangle wave to assign positional information, because the sawtooth wave produced more accurate z positions with less variance between cycles.

Zebrafish heart cells

Sample preparation and imaging have been described previously (Voleti et al., 2019). In brief, GCaMP and dsRed were expressed in the cytosol and the nuclei of myocardial cells, respectively. The 3D + T images obtained with the SCAPE 2.0 system were skew corrected to account for the oblique imaging geometry and analyzed with 3DeeCellTracker. The 3D coordinates obtained from this study were used to extract calcium dynamics in the cells in the previous study (Voleti et al., 2019).

Tumor spheroid

The tumor spheroid was cultured according to previous procedures (Vinci et al., 2012; Yamaguchi et al., 2021). In brief, a suspension of HeLa cells (RRID:CVCL_0030, RCB0007, Riken Cell Bank; certified as mycoplasma-free and authenticated by using STR profiling) expressing the FRET-type ERK sensor EKAREV-NLS (Komatsu et al., 2011) was added to a PrimeSurface 96-well plate (MS-9096U, Sumitomo Bakelite) for 3D cell culture at a density of 1200 cells/well. The grown spheroids were transferred to 35 mm dishes coated with poly-L-lysine (Sigma-Aldrich) and further grown in DMEM/F-12, no phenol red (ThermoFisher) containing 10% FBS and 1% penicillin and streptomycin. The 3D + T images of the spheroid were recorded using a two-photon microscope equipped with a water-immersion objective lens (N25X-APO-MP 25x CFI APO LWD objective, Nikon) and a high-sensitivity gallium arsenide phosphide (GaAsP) detector (A1R-MP+, Nikon). An 820 nm optical pulse generated by a Ti:Sapphire laser was used for the excitation. The fluorescence was split using two dichroic mirrors (FF495-Di03 and FF593-Di02, Opto-line), and the split light below 495 nm and between 495-593 nm was detected by independent channels (CH1 and CH2, respectively). The fluorescence signals were integrated four times to increase the signal-to-noise ratio. Each step size was 4 µm along the z-axis, and each volume comprised 54 steps, requiring 5 min for recording. Cells that underwent cell death or cell division were excluded manually after the events when evaluating the tracking accuracy.

Code availability statement

The code for tracking cells and for training neural networks, the demo data, the pre-trained weights of the neural networks, and instructions for installation and use of the code are available at http://ssbd.qbic.riken.jp/set/20190602/ (Demos190610.zip). An updated version of the code can be found at https://github.com/WenChentao/3DeeCellTracker. The guides for using the code and for setting parameters have also been included in the same GitHub repository.

Note in proof

In the original version of the paper, we tracked the datasets with our original version of the programs, based on the package "3DeeCellTracker 0.2" (see the version information at https://pypi.org/project/3DeeCellTracker/#history). The runtime of the tracking process was not optimized at that time, which sometimes could lead to a long runtime for tracking, especially in the ensemble mode and/or when the cell number is large. In addition, our previous tracking program did not hide the details irrelevant to the end-users and did not provide useful feedback on the intermediate segmentation/tracking results to users. To solve these two issues, we first improved the performance of the code related to "FFN" and "PR-GLS" using the vectorization techniques available in the "NumPy" package in Python. We also improved the performance of the code related to "accurate correction" by extracting the previously repeated calculations out of the loops and performing them only once. Note that we did not change our programs related to segmentation (3D U-Net + watershed), which may still require considerable time depending on factors such as the image size and the structure of the 3D U-Net. As a result, our new program accelerated the speed by 1.7-6.8 times in the tested datasets (Table 6). Such acceleration was especially pronounced in the ensemble mode (worm #4) and when the cell number is large (3D tumor spheroid).
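The two optimizations mentioned above (NumPy vectorization and hoisting repeated calculations out of loops) can be illustrated with a toy example on pairwise squared distances, an operation typical of point-set registration. This is illustrative only and is not the package's actual code.

```python
import numpy as np

def pairwise_sq_dists_loop(a, b):
    """Naive version: nested Python loops, recomputing terms every pass."""
    out = np.empty((len(a), len(b)))
    for i in range(len(a)):
        for j in range(len(b)):
            out[i, j] = np.sum((a[i] - b[j]) ** 2)
    return out

def pairwise_sq_dists_vec(a, b):
    """Vectorized version: |a - b|^2 = |a|^2 + |b|^2 - 2 a.b, where the
    squared norms are computed once (hoisted out of any loop) and the
    cross term is a single matrix product."""
    aa = np.sum(a ** 2, axis=1)[:, None]
    bb = np.sum(b ** 2, axis=1)[None, :]
    return aa + bb - 2.0 * (a @ b.T)
```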
Second, we simplified the program by moving the details into the package. As a result, users can track cells by simply running a few commands in Jupyter notebooks. We also added new functions to show the intermediate results of segmentation and tracking, so that users can now use these results to guide their setup of the segmentation/tracking parameters. To use these new features, users can follow the updated instructions in the README file in our GitHub repository to install the latest version of 3DeeCellTracker (currently 0.4.0) and use our Jupyter notebooks. The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Alternative bioenergy through the utilization of Kappaphycus alvarezii waste as a substitute substrate for biogas production

Biogas is a renewable energy resource that can be developed wherever sufficient renewable substrates are available; it is produced by the fermentation of organic substances metabolized by anaerobic bacteria. In this research, Kappaphycus alvarezii seaweed waste from carrageenan processing and rumen contents were used. The research aims to determine whether the carrageenan processing waste of the macroalga K. alvarezii can be used as an alternative source for generating biogas. The treatments were P0 (100% rumen contents), P1 (75% rumen contents and 25% seaweed waste), P2 (50% rumen contents and 50% seaweed waste), P3 (25% rumen contents and 75% seaweed waste), and P4 (100% seaweed waste). The results showed that, according to the biogas quality criteria of SNI (Indonesia National Standard) 8019:2014, K. alvarezii seaweed waste from carrageenan processing can be utilized as an alternative source for producing biogas, and the highest methane level was obtained with treatment P2 (50% rumen contents and 50% seaweed waste), at 58.61%.

Introduction

The need for fuel has increased along with population growth, regional development, and the energy crisis in the world. The scarcity of fuel caused by the significant rise in world oil prices has pushed the government to invite people to address the energy problem. Furthermore, many studies have been conducted in order to find alternative, renewable energy sources [1]. The increase in the population's demand for energy, the depletion of the world's oil reserves, and the problem of fossil fuel emissions put pressure on every country to immediately produce and use renewable energy [2]. Indonesia has various natural resources which can be utilized for energy, such as oil, gas, coal, geothermal heat, water and so forth. They can be used directly in various development activities or exported to earn state revenue. However, these materials are not renewable, and the use of non-renewable resources causes crises. One of the recent signs of an energy crisis is the scarcity of oil fuels, such as kerosene, liquefied petroleum gas, gasoline and diesel fuel. Scarcity happens because the need for fuels is very high and keeps increasing every year, whereas the crude oil used as the raw material for fuel is limited and takes time to produce [3]. Hence, alternative energy is necessary for preserving oil reserves at the present time. Biogas is one of the energy sources that can be developed by providing raw materials available in nature in sufficient amounts (a renewable condition). Renewable energy sources are relatively easy to obtain, require low operating expenses, and do not cause waste problems, so the oil shortage can be mitigated. Biogas is a flammable gas derived from the fermentation of organic materials by anaerobic bacteria, which do not need oxygen to survive and reproduce. Generally, all types of organic materials suitable for a simple biogas system, such as rumen contents and animal urine, can be processed to produce biogas [4]. Biogas can be used to produce methane as a substitute for fuels, especially kerosene, and can be used for cooking. On a large scale, biogas can be used as fuel for power plants.
In addition, the process of biogas production yields a residue of cattle rumen contents that can be used directly as organic fertilizer for crops or agricultural cultivation. More importantly, biogas can be used to reduce dependence on non-renewable fuels [5]. One of the commodities in Indonesian waters with potential for development is seaweed. Seaweed is widely used in, and profitable for, the industries of food and beverage, personal care, cosmetics, animal feed, steel, ceramics, paint, ink, mining, coal briquettes and asphalt, paper and pulp, textiles, fertilizer and medicine (pharmacy), for example as a source of dietary fiber [6]. Seaweed can be processed into various foods, and this processing produces unused waste that could be utilized. According to Alamsjah and Prayogo [7], seaweed waste is usually left accumulating at landfills, but it has the potential to be processed into biogas. An example is Kappaphycus, from which carrageenan can be extracted; the waste from carrageenan processing is still underutilized. According to Saputra et al. [2], biogas may come from various organic matter such as cattle rumen contents, feces, scrap paper and aquatic plants (e.g. hyacinth, filamentous algae), as well as seaweed and seaweed waste. The C/N ratio must be considered in the making of biogas. The C/N ratios for Kappaphycus and carrageenan waste are 43.98/L and 55.01/L, respectively. Cattle rumen contents are the most suitable starter for studying biogas production because they have C/N ratios between 11 and 30 [8]. Another important component of the rumen substratum of a cow is the gut bacteria that produce methane. The bacteria in the bowel of ruminants can break down organic compounds by fermentation, so the anaerobic processing of animal waste can produce gases consisting of methane (CH₄) and carbon dioxide (CO₂). The substrate in cattle rumen contents includes bacteria that produce methane in the stomach of ruminants; these bacteria assist the fermentation process, so biogas production in the digester proceeds faster. In addition, fresh rumen contents are easier to process than rumen contents that have been stored for a long time and/or dried [9].

Methodology

A research method is used to solve a problem, which may be done by data collection through observation, surveys and experiments [10]. The method used in this research was the descriptive method: a planned attempt to reveal new facts, corroborate a theory, or refute existing research results. The study was done by observing and comparing the amounts and volumes of methane and carbon dioxide and the C/N ratios resulting from each treatment. The materials used in this study were 10,000 g of carrageenan processing waste of Kappaphycus, 10,000 mL of fresh rumen contents and 4,000 mL of water. To each treatment, 200 mL of water was added. The following treatments were used in this research:

P0 = 1,000 mL of rumen contents + 0 g of carrageenan processing waste (100% : 0%);
P1 = 750 mL of rumen contents + 250 g of carrageenan processing waste (75% : 25%);
P2 = 500 mL of rumen contents + 500 g of carrageenan processing waste (50% : 50%);
P3 = 250 mL of rumen contents + 750 g of carrageenan processing waste (25% : 75%); and
P4 = 0 mL of rumen contents + 1,000 g of carrageenan processing waste (0% : 100%).

The materials used in the preparation of the anaerobic fermentation process were carrageenan seaweed waste (substrate), rumen contents (co-substrate) and water.
The addition of water was aimed at meeting the water levels required for biogas production. The number of bioreactors (digesters) used in this research was 20: there were 5 treatments, each repeated 4 times. The volume of a bioreactor was 1,500 mL; the bioreactor was made from plastic, with an airtight cover, a faucet and a hose. The cover was connected to the faucet, which was connected to the hose. The hose was connected to a gas holder at the top, in the form of a plastic bag fastened with a rubber band and cling wrap. The substrate consisted of cattle rumen contents, carrageenan waste and water, with the rumen contents and carrageenan waste mixed at the specified ratios. Every digester tube contained the filling materials (processed carrageenan seaweed waste and rumen contents) and water at 80% of the fermenter volume [11]. The fermentation system in the biogas production in this research used a closed system (batch fermentation) for 21 days of fermentation, and four tests were conducted for every treatment. The biogas produced filled the space in the bioreactor and then moved through the hose to be collected in the gas holder at the top. The collected gas was analyzed directly according to the length of fermentation. According to Anggraini et al. [12], the fermentation time required to produce methane (CH₄) is 21 days. The biogas was analyzed in terms of methane levels, gas volume and carbon dioxide levels. The biogas sample was extracted into a plastic bag; the volume of the biogas was then measured using a volume meter, and the levels of methane (CH₄) and carbon dioxide (CO₂) were measured using gas chromatography. The analysis of the C/N ratio was conducted by measuring the C-organic content and the total nitrogen of each sample. The analysis of C-organic in this research used the ashing method. The procedure of the C-organic analysis began with weighing an empty cup (a) and then weighing about 1 g of each sample treatment into the cup (b). The sample was then put into an oven for 4 hours at a temperature of 105 °C, after which the cup was cooled in a desiccator for 15 minutes; the sample and the cup were weighed again after being removed from the desiccator (c). In the next step, the cup containing the sample was put in a furnace (600 °C for 4 hours), then placed in a desiccator after being removed and weighed for the final measurement (d). The calculation for this analysis used the equations below:

% of water content = (b − c) / (b − a) × 100% (1)

% of ash content = (d − a) / (b − a) × 100% (2)

% of organic substrate = 100% − (% of water content + % of ash content) (3)

% of C-organic = % of organic substrate × k (4)

Note: k is the conversion factor of organic substrate to carbon.

The analysis of the total nitrogen in this research used the Gunning method. This analysis was conducted in three stages, namely digestion, distillation and titration. The digestion stage was aimed at breaking down the material (sample) through the addition of K₂SO₄ and CuSO₄. A 1 g sample was put into a Kjeldahl flask together with 10 mL of H₂SO₄, 5 g of K₂SO₄, 0.3 g of CuSO₄ and boiling stones. Next, the mixture was heated on an electric heater, first at low heat and then at high heat; heating ended when the solution became colorless. Then 100 mL of ddH₂O, 1 g of Zn and 50 mL of 45% NaOH were added to the Kjeldahl flask, and the flask was mounted in the distillation stage. The mixture was distilled until the volume of distillate reached 75 mL. The titration was started by pouring the 75 mL of distillate into a titration flask, to which 50 mL of 0.1 N HCl and PP indicator were added. The titrant used was 0.1 N NaOH, and the titration was carried out to determine the volume of NaOH needed to neutralize the sample. The level of N (%) can be calculated using the equation below:

% of total N = (V_HCl × N_HCl − V_NaOH × N_NaOH) × 14.008 / W × 100%

where V_HCl and N_HCl are the volume (mL) and normality of the added HCl, V_NaOH and N_NaOH are the volume (mL) and normality of the NaOH titrant, and W is the sample weight (mg). The C/N ratio could be calculated after obtaining the C-organic level and the total nitrogen value, using the equation below:

C/N ratio = % of C-organic / % of total N

Results and discussion

This research yielded the methane levels, the C/N ratios (before and after fermentation), and the pH and temperature values (before and after fermentation). The C/N ratio of the rumen contents to that of the seaweed waste was 41.04 : 36.5. The organic carbon of the rumen contents was 28.15 and the total nitrogen was 0.68, while the organic carbon of the seaweed waste was 26.80 and the total nitrogen was 0.73. The biogas production in this research was measured in terms of quality (levels of methane in the biogas) and quantity (biogas volume). The methane levels and the biogas volumes produced for each ratio of rumen contents to seaweed carrageenan waste are displayed in Table 1 (Note: P0 (100% : 0%) = 1,000 mL of rumen contents + 0 g of carrageenan processing waste; P1 (75% : 25%) = 750 mL of rumen contents + 250 g of carrageenan processing waste; P2 (50% : 50%) = 500 mL of rumen contents + 500 g of carrageenan processing waste; P3 (25% : 75%) = 250 mL of rumen contents + 750 g of carrageenan processing waste; P4 (0% : 100%) = 0 mL of rumen contents + 1,000 g of carrageenan processing waste). Table 1 shows that the methane levels varied with the ratio of rumen contents to seaweed waste across all treatments. Treatment P2 yielded the highest average level of methane (58.61%), while the highest average biogas volume was yielded by treatment P3 (1,306 mL). Relating the levels of methane to the biogas volume yielded a regression equation. The biogas production can also be linked with the C/N ratio; the C/N ratios at the beginning and the end of the 21-day fermentation are shown in Figure 3, the average temperatures at the beginning and the end of fermentation for each ratio of rumen contents to carrageenan processing waste are shown in Figure 4, and the average pH values at the beginning and the end of fermentation for each ratio are shown in Figure 5. The pH and temperature rose by the end of fermentation: the temperature at the beginning of fermentation ranged from 30.38 to 31.63 °C and increased to 31.38-32.75 °C at the end of fermentation, while the pH at the beginning of fermentation ranged from 6 to 8 and increased to 6.5-8.5 at the end of fermentation.
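As a worked illustration of equations (1)-(4) and the nitrogen formula, the sketch below computes %C-organic, %N, and the C/N ratio. The value of the conversion factor k is an assumption here (1/1.724 is a commonly used organic-matter-to-carbon factor), as the study does not state the value used; the function names are illustrative.

```python
def c_organic(a, b, c, d, k=1 / 1.724):
    """Percent organic carbon from the cup weights defined above (grams).
    a: empty cup; b: cup + fresh sample; c: cup + oven-dried sample;
    d: cup + ashed sample. k is the organic-substrate-to-carbon
    conversion factor (1/1.724 assumed; not stated in the study)."""
    water = (b - c) / (b - a) * 100.0
    ash = (d - a) / (b - a) * 100.0
    organic = 100.0 - (water + ash)
    return organic * k

def total_n(v_hcl, n_hcl, v_naoh, n_naoh, sample_mg):
    """Percent total nitrogen from the Kjeldahl back-titration."""
    return (v_hcl * n_hcl - v_naoh * n_naoh) * 14.008 / sample_mg * 100.0

# C/N ratio of a feedstock; with the values reported below for rumen
# contents (organic carbon 28.15, total nitrogen 0.68) this gives ~41.
cn_ratio = 28.15 / 0.68
```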
In the anaerobic fermentation process that forms biogas, bacteria play a very important role in breaking down organic matter. All reactions during fermentation involve bacteria, from the stages of hydrolysis, acidogenesis and acetogenesis through methanogenesis. The process of biogas formation must take place under anaerobic conditions, which means that the digester must be airtight so that no air can enter. The conversion of the organic compounds contained in waste into methane and carbon dioxide does not require oxygen; if there is a leak in the digester during fermentation, methane will not be formed. In general, biogas is made using cow dung, because cattle dung is the most suitable starter for the formation of biogas [9]. In this research, the starter used was rumen contents, while the substrate used was carrageenan processing waste. Triwisari [13] reports that the main components of carrageenan processing waste are cellulose (26.72%) and carbon (46%). Different organic contents can affect the formation of biogas and methane [14]. According to Anggraini et al. [12], the methane content of the biogas produced in a reactor depends on the type of feed, the input composition, the fermentation time and the reactor capacity. The highest methane level was produced in P2 (50% : 50%), with 500 mL of rumen contents and 500 g of carrageenan processing waste. This may be because the raw carrageenan processing waste was not completely degraded by bacteria, while the organic components of the rumen contents, as a source of methane, were degraded by the bacteria. The organic component is a source of nutrients for the bacteria that produce biogas; Saputro [15] and Mara and Ida [16] add that the organic component affects bacterial productivity in producing biogas. The biogas volume produced did not follow the same pattern as the levels of methane, but it was still influenced by the ratio of rumen contents to carrageenan processing waste. The measured biogas volume was also influenced by the length of fermentation; in this research, the fermentation lasted 21 days, following Anggraini et al. [12], who state that the fermentation time required to produce the highest amount of methane (CH₄) is 21 days. According to Mara and Ida [16], the main factor affecting differences in the biogas volume produced is the physical properties of the material, determined by the water content and the acidity of the medium (pH level). The ratio of rumen contents to carrageenan processing waste and the addition of water result in different blending properties for each composition. Furthermore, Budiharjo [17] claims that an increase in biogas volume is an indication that the system has been running well and stably. According to Calzada et al. [18] and Indarto [19], the composition and the volume of biogas depend on the characteristics of the substrate, and the highest biogas volume is not always accompanied by high levels of methane. Yulistiawati [20] says that a low C/N ratio in the substrate will produce biogas characterized by low levels of CH₄ and H₂ and high levels of CO₂ and N₂. The Indonesia National Standard [21] reports that biogas generally consists of CH₄ (40-70%) and CO₂, while according to Sooerawidjaya [22], biogas consists of CH₄ (50-80%), CO₂ (20-50%) and several other gases such as H₂, CO, N₂, O₂ and H₂S.
The test conducted on methane for the ratios of rumen contents to carrageenan processing waste in this study showed up to 58.61% methane; thus, this level of methane met the standard composition for biogas. Biogas production was also linked to the change in the C/N ratio, which declined over the 21 days of fermentation. The decrease in the C/N ratio shows that the nutrients were used well. The C/N concentration declined with fermentation time; the decline in carbon and nitrogen was the result of the biodegradation of organic matter by bacteria originating either from the rumen contents or from the carrageenan processing waste, which used it as a source of energy for growth and for the formation of biogas. This phenomenon was reported by Yani and Darwis [23], who state that a decrease in the C/N ratio means that carbon and nitrogen are used as nutrients by microbes to grow and develop. Furthermore, Siallegan [24] shows that the reduction in the C/N ratio can cause biogas production to stop; at that point the C/N ratio can no longer support bacterial biogas production, and the residue can instead be used as fertilizer, since the C/N ratio allowed for a solid-fertilizer ingredient ranges from 12 to 25 [25]. The fermentation process also affects the temperature and pH [26,27,28]. According to Sofyan [29], bacterial enzymes change the substrate into products and release heat; this release of heat is an exothermic reaction of organic matter decomposition during the acidification stage of biogas formation. Material decomposition results in acids, carbon dioxide and heat. Because the decomposition of organic matter takes place under anaerobic conditions, the system produces heat; consequently, the temperature in the reactor increases and affects the microorganisms and microbes in the fermentation process. The other parameter that changed during the fermentation process is pH. The anaerobic biodegradation of organic matter is influenced by the environment in the biodigester; an acidic environment is appropriate for anaerobic biodegradation, and the degradation by bacteria is affected by pH. Furthermore, [17,30,31] conclude that pH increases after fermentation. The bacteria can live at a pH of 6.5-8.5 (optimally at a pH of 7.0-8.0). This is because the acids produced are converted into ammonia and alkaline compounds; afterwards, the pH declines again and becomes stable at the end of fermentation. The role of pH is important because the process produces acids and alkalis, such as organic acids and ammonium ions, during the degradation of organic compounds in the digester. Kresnawaty et al. [32] state that at the beginning of biogas formation, acid-forming bacteria are active and cause the pH in the digester to decrease; then, methanogenic bacteria use the acid as a substrate, and the pH increases. This is supported by Sejati [28], who explains that the acid formed in the acidification stage is used by methanogenic bacteria as a substrate for producing methane, which subsequently increases the pH in the digester.
Extreme Birkeland Currents Are More Likely During Geomagnetic Storms on the Dayside of the Earth

We examine the statistical distribution of large-scale Birkeland currents measured by the Active Magnetosphere and Planetary Electrodynamics Response Experiment in four unique categories of geomagnetic activity for the first time: quiet times, storm times, quiet-time substorms, and storm-time substorms. A novel method is employed to sort data into one of these four categories, and the categorizations are provided for future research. The mean current density is largest during substorms and its standard deviation is largest during geomagnetic storms. Current densities which are above a low threshold are more likely during substorms, but extreme currents are far more likely during geomagnetic storms, consistent with a paradigm in which geomagnetic storms represent periods of enhanced variability over quiet times. We demonstrate that extreme currents are most likely to flow within the Region 2 current during geomagnetic storms. This is unexpected in a paradigm of the current systems in which Region 1 current is generally larger.

Studies of storm-time substorms have shown that both R1 and R2 current are intensified during these events (Mishin et al., 2020), consistent with substorms generally (Coxon et al., 2014b, 2017), although it has also been shown that Birkeland currents are highly filamentary both in substorms (Forsyth et al., 2014) and in storm-time substorms (Nakamura et al., 2016). Recently, the Substorm Onsets and Phases of the Electrojet (SOPHIE) technique (Forsyth et al., 2015) has provided a method to identify the start (and therefore the end) of every substorm phase. This data set allows a time series to be categorized by whether any given timestamp is in a substorm, and in which phase; it has been exploited in various statistical studies (e.g., Coxon, Freeman, et al., 2018; Forsyth et al., 2016).

Geomagnetic storms have been well examined for decades (Akasofu et al., 1963; Chapman & Bartels, 1940), preceded by the study of what were simply called magnetic storms over a century ago (Birkeland, 1908, 1913; Chapman & Ferraro, 1931).
Gonzalez et al. (1994) note that it was once thought that these storms were simply collections of substorms, but suggest that storms and substorms are interrelated but distinct phenomena. The Dst index has been used extensively to understand storm dynamics (Akasofu et al., 1963), as geomagnetic storms lead to a build-up of the ring current, which in turn causes characteristic signatures in the equatorial ground magnetometers used to produce the Dst index. Yokoyama and Kamide (1997) argued that the intensity of a storm is linked to its duration. Strong IMF B_Z is a good predictor of a geomagnetic storm (Burton et al., 1975; Gonzalez & Tsurutani, 1987; Kokubun, 1972; Loewe & Prölss, 1997; Tsurutani et al., 1992), while solar wind pressure is more relevant than B_Z for storm sudden commencements (SSCs) (Taylor et al., 1994). During geomagnetic storms, the enhanced ring current retards the onset of nightside reconnection and thus the onset of substorms, allowing the auroral oval to reach larger sizes prior to substorm expansion phase onset (Milan, 2009; Milan, Grocott, et al., 2009; Milan, Hutchinson, et al., 2009). Recently, it has been demonstrated that storm times are vital to understanding extreme GIC signatures (Smith et al., 2019, 2021), and work has been done to explore Birkeland currents during geomagnetic storms (e.g., Kleimenova et al., 2021; Lukianova, 2020a, 2020b; Maute et al., 2021; Ovodenko et al., 2020; Pedersen et al., 2021, 2022, 2023). Hutchinson et al. (2011) developed a method to algorithmically identify the individual phases (initial, main, and recovery phases) of geomagnetic storms, finding that the duration of the main phase increased with storm intensity up to a point but then started to decrease again, which they argued was contrary to Yokoyama and Kamide (1997). This method was adapted by Walach and Grocott (2019), who made a small change to the way the start of the main phase is determined and investigated convection patterns during geomagnetic storms. Murphy et al. (2018) presented a storm list which was defined in terms of the storm peak (i.e., minimum Dst) and the start and end of the storm. These more recent lists allow a time series to be categorized by whether or not any given timestamp is in a geomagnetic storm, and timestamps can be further subdivided by storm phase.

Field-aligned currents, proposed by Birkeland (1908, 1913) and known as Birkeland currents, are an important component of solar wind-magnetosphere-ionosphere coupling, especially during the active intervals of geomagnetic storms and substorms. They are known to chiefly comprise two rings of current which encircle the geomagnetic pole, offset toward the nightside (Iijima & Potemra, 1978): Region 1 (R1) on the poleward side and Region 2 (R2) on the equatorward side. R1 is upward on the dusk side of the polar cap and downward on the dawn side, and R2 vice versa. There are other Birkeland currents: NBZ currents flow during northward IMF, hence their name, and are observed poleward of R1 (Iijima et al., 1984; Zanetti et al., 1984). Cusp currents are also observed poleward of R1 during southward IMF, and have a morphology determined by IMF B_Y (Iijima & Potemra, 1976b; Saunders, 1989). Further high-latitude currents have been reported as mantle currents, associated with antisunward convection flows and the velocity shear as they crossed the polar cap (Ohtani et al., 1995a, 1995b).
Ohtani et al. (1995b) noted that "The term 'region 0' has been used in the past to refer to any [Birkeland current] system poleward of region 1 currents," but it seems that cusp and mantle currents are driven by different physical mechanisms.

The Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) provides measurements of Birkeland currents from 2010 onwards based on magnetometer measurements from the Iridium Communications Network constellation (Anderson et al., 2000, 2021; Waters et al., 2001, 2020). Iridium data were used to study Birkeland currents during two geomagnetic storms (Anderson et al., 2002), concluding that the Birkeland current systems intensified and moved equatorward with southward IMF B_Z during geomagnetic storms, and that they moved equatorward more quickly when the solar wind pressure was higher in storms; this has since been shown statistically (Coxon et al., 2014a, 2014b; Carter et al., 2016). AMPERE was used by Coxon et al. (2016) to show that the Birkeland currents in the Northern Hemisphere are typically stronger than those in the Southern Hemisphere (Coxon et al., 2022; Laundal et al., 2017), and it has also been used to explore the timescales of Birkeland currents during solar wind driving (Anderson et al., 2014; Coxon et al., 2019; Forsyth et al., 2018; Kunduri et al., 2020; Milan et al., 2018; Shore et al., 2019). A review of magnetospheric currents and their generation in solar wind-magnetosphere-ionosphere coupling was conducted by Milan et al. (2017), and a review of research with AMPERE was presented in Coxon, Milan, and Anderson (2018).

In this paper, we follow on from work on the underlying probability distributions of Birkeland current densities. Super Dual Auroral Radar Network (SuperDARN) data have been used to obtain ionospheric vorticities, which are closely related to Birkeland currents (Chisham et al., 2009; McWilliams et al., 2001; Sofko et al., 1995). These data were used by Chisham and Freeman (2010) to show that the distributions of vorticity magnitude had more kurtosis than normally-distributed quantities (they were leptokurtic). Chisham and Freeman (2021) fit q-exponential functions to the distributions to obtain the survival function of the distributions, and therefore the probabilities of observing extreme values of ionospheric vorticity. Coxon et al. (2022) performed a similar analysis on AMPERE-derived Birkeland currents from 2010 to 2012, and found that they are also well described by a q-exponential distribution. They found that the probability of current densities above a given threshold was higher in the Northern Hemisphere than in the Southern Hemisphere for currents at multiple amplitude thresholds, and that the disparity was greater at larger thresholds. They also found that extreme currents were most probable in the average R2 current region on the dayside, at a colatitude of 18°-22°, but had the sense of the average R1 current on the dayside. They identified two paradigms which could explain the results:

1. Extreme currents occur in R1 at the point when the current ovals are most expanded. Counter-intuitively, this means that the underlying distribution of the R1 current system changes as the polar cap expands.
2. Extreme currents occur in R2 due to closure through an intensified ring current during geomagnetic storms. Counter-intuitively, this means that extreme R2 currents occur in the opposite sense to the R2 current system.
Coxon et al. (2022) did not conclusively show which paradigm was responsible for their results, noting that both paradigms might play a role and highlighting that filamentary currents may also be a factor (Forsyth et al., 2014; Liu et al., 2021; Nakamura et al., 2016). Building on the work of Coxon et al. (2022), we break down the distributions of Birkeland currents by storm or substorm phase to examine how large-scale geomagnetic activity impacts the occurrence of extreme Birkeland currents. We define four categories based on whether a timestamp is within a storm, a substorm, both, or neither. We then subdivide the AMPERE data set according to the different categories (Section 2), demonstrating that the mean current density is generally larger during substorms but that the standard deviation is generally larger during storms (Section 3). In Section 4, we examine the probability of low, high, and extreme currents and find that extreme currents are most likely to be observed during storm times and storm-time substorms on the dayside of Earth. We then subdivide into R1 and R2 current regions to resolve the ambiguity between the paradigms described above.

Birkeland Currents in Different Event Categories

We employ AMPERE data between 2010 and 2017, comprising processed magnetic field measurements from the Iridium telecommunications network (Anderson et al., 2000, 2021; Waters et al., 2001, 2020). The data set is available in files comprising a single day in a single hemisphere, and each file gives Birkeland current densities on a grid covering 24 hr of MLT within 50° colatitude of the pole in Altitude-Adjusted Corrected Geomagnetic (AACGM) coordinates. We adopt the common convention that upward current is positive and downward current is negative. Each grid is available in a sliding window 10 min long, evaluated every 2 min, such that each day contains 720 timestamps.

In order to conduct the analysis herein, we analyze the AMPERE data set and find the list of days for which all 720 timestamps are available. This gives 2,291 days for our analysis in the Northern Hemisphere and 2,324 in the Southern Hemisphere. In this reduced data set, we iterate through each timestamp and categorize it into one of four categories: quiet times, storm times, quiet-time substorms, and storm-time substorms.

To identify substorms we use SOPHIE (Forsyth et al., 2015), extended to the end of 2017. SOPHIE is defined using percentile thresholds on the rate of change of the SML index (Newell & Gjerloev, 2011), called Expansion Phase Thresholds (EPTs); we use an EPT of 75%, which means that a substorm is identified as a negative rate of change in SML above the 75th percentile in each year. Forsyth et al. (2015) recommend this value due to its similarity with other lists. In addition to the phase descriptions, an "SMU check" flag is set for periods of enhanced convection (when SMU and SML are expected to intensify in tandem) to differentiate them from substorms (where SMU and SML are expected to intensify separately). We count a timestamp as being within a substorm if it is between the start of the substorm expansion phase and the end of the substorm recovery phase as determined by SOPHIE. We do not count timestamps as being within a substorm if they are during expansion/recovery phases for which the SMU check flag is set, nor do we count expansion or recovery phases which do not follow in order; that is to say, we do not count expansion phases which occur immediately before growth phases, and we do not count recovery phases which occur immediately after growth phases.

To identify storms we use the Walach and Grocott (2019) list, extended to the end of 2017. We count a timestamp as being within a storm if it is between the start of the storm initial phase and the end of the storm recovery phase. We define quiet times as any times that have not been categorized as within a storm or substorm according to the above description. We stress that the "quiet times" and "storm times" categories do not contain any substorms. Lists of the timestamps in each category are presented in Supporting Information S1, and the numbers of timestamps in each category are as follows: in quiet times, there are 1,119,424 timestamps in the North and 1,134,888 in the South; in storm times, 94,238 and 95,895; in quiet-time substorms, 382,654 and 387,967; and in storm-time substorms, 53,204 and 54,530. To make the four categories statistically similar, we subsample randomly to reduce each to 53,000 maps. It is for this reason that we consider data for an 8-year period as opposed to the 3-year period used previously (Coxon et al., 2022); this allows our subsamples to be as large as possible. (The subsamples are presented in Supporting Information S1.)

Mean Birkeland Current Density per Category

Figure 1 shows the mean current density for the four different categories outlined in Section 2, and Figure 2 shows the standard deviation for each of the categories. Figures 1a and 1b show quiet times (non-storm, non-substorm) in the Northern Hemisphere and Southern Hemisphere, respectively. The means are weaker in the Southern Hemisphere than in the Northern Hemisphere, but the morphology is very similar. The R1 and R2 current systems are clearly visible; the R1 current system lies between 10° and 15° colatitude on the dayside (at 11 and 13 MLT) and between 16° and 21° colatitude on the nightside (at 01 and 23 MLT). The R2 current system is equatorward of R1 and slightly thicker; it has a larger latitudinal extent. We interpret the latitudinal extent of each region as a result of averaging over the spatial variation of the current ovals, rather than as a sign that the current sheet is getting thicker. There are also current systems poleward of R1 on the dayside which could be NBZ or R0 current systems. Figures 2a and 2b show the standard deviation, which is larger for currents closer to the pole on the dayside (i.e., in the cusp/R0/NBZ current system).
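The categorization logic reduces to two interval-membership tests per timestamp. A minimal sketch, assuming the storm and substorm interval lists (with the exclusions above already applied) are available as (start, end) pairs; the function names are illustrative:

```python
def in_any(t, intervals):
    """True if timestamp t falls inside any (start, end) interval."""
    return any(start <= t <= end for start, end in intervals)

def categorize(t, storm_intervals, substorm_intervals):
    """Assign a timestamp to one of the four categories used above."""
    storm = in_any(t, storm_intervals)
    substorm = in_any(t, substorm_intervals)
    if storm and substorm:
        return "storm-time substorm"
    if substorm:
        return "quiet-time substorm"
    if storm:
        return "storm time"
    return "quiet time"
```

The equal-sized subsamples described below can then be drawn with, for example, random.sample(category_timestamps, 53000).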
Figures 1c and 1d show the means for storm times. The regions of non-zero mean current here have a larger latitudinal extent than those for quiet times. The polemost boundary of the R1 current system is the same distance from the pole as in quiet times, but the equatormost boundary is further from the pole. Similar to above, we interpret this increased latitudinal extent as a signature of averaging over a larger range of current oval positions. Since the R1 current oval is located within 1° of the open/closed field line boundary (OCB) (Clausen et al., 2013b), this implies that geomagnetic storms lead to greater variability in the location of the OCB, and that the OCB reaches lower latitudes during geomagnetic storms than outside these storms. This is consistent with enhanced dayside reconnection combined with the ring current retarding the onset of nightside reconnection (Milan, 2009; Milan, Grocott, et al., 2009; Milan, Hutchinson, et al., 2009). The peak mean current densities are generally similar to quiet times, which may indicate that although the currents vary more spatially they are not, on average, more intense than during quiet times. From the standard deviation of storm times in Figures 2c and 2d, we can see that the variability of the current density is higher for storm times than for quiet times across the spatial range observed. As such, although the mean current is very similar, the likelihood of large currents will be higher; this could be because the mean current is being smoothed over a larger area for storms compared to quiet times due to the motion of the OCB (we address this in more detail in Section 4).

Figures 1e and 1f show the means for quiet-time substorms. The mean R1 and R2 current densities during substorms are stronger than they are in the quiet or storm-time categories, whereas the currents poleward of the R1 current system are much weaker. The R1 currents are stronger than the R2 currents, consistent with previous observations (Coxon et al., 2014b, 2017). The R1 current system has a larger latitudinal extent, extending further equatorward than in the previous two categories but with a poleward boundary location which remains the same. Looking at Figures 2e and 2f, the standard deviations are smaller than for storm times, suggesting that substorms do not lead to variability in current density as high as in storms.

Finally, Figures 1g and 1h show the means for substorms within storms. The mean current densities are substantially larger than for any other category, and the equatorward edge of the R1 current system is further from the pole than in any other category. The R2 current system also has a larger latitudinal extent. The mean current density poleward of R1 is approximately the same as for quiet-time substorms. Figures 2g and 2h show that the standard deviation in this category is far higher than for the previous three categories.
The means and standard deviations indicate that geomagnetic storms and substorms have different impacts on Birkeland current density, suggesting that substorms lead to higher current on average but that storms are responsible for driving periods of high variability. However, this inference relies on describing the underlying distributions by the mean and standard deviation, thus occluding a great deal of the underlying variability, especially in the extremes that contribute most to the higher-order moments of the distribution (Coxon et al., 2022). In order to properly quantify the difference between geomagnetic storms and substorms, we proceed to examine the probabilities of low, high, and extreme currents seen during these two types of event.

Probabilities of Birkeland Current Density per Category

We investigate the spatial distributions of different current density thresholds by calculating the probability of current densities above those thresholds in each bin. We refer to these probabilities as P(J), where J is the threshold we set. This method was described fully in Coxon et al. (2022), but briefly recapping it here: we use maximum likelihood estimation to estimate the probability distribution of the underlying data, and then apply the survival function of the fitted probability distribution to recover the probabilities P(J). We fit the probability distribution on either side of the mode current j_m separately, such that we derive the probability distributions for j > j_m and j < j_m; for ease of discussion we will refer to these as positive and negative current, respectively, through the rest of this manuscript.

One update to our method over our previous work is that in cases where Ridders' Method (Ridders, 1979) does not find a solution to the q-exponential fitting, we use a brute-force method to explore the parameter space, assuming a solution is within the constraints 0.98 ≤ q ≤ 1.02 and 0.0 ≤ κ ≤ 0.5. The former assumption is valid because Ridders' Method only fails to converge when q is close to 1. Therefore, the white "holes" in the maps of the probabilities seen in Coxon et al. (2022) are not reproduced here. We note that we are modeling the underlying probability distributions, rather than applying probability thresholds to the data directly; this means we can extrapolate to events more extreme than those seen in our interval, but that our selected thresholds do not map to percentiles of the data. We calculate the probability of any appreciable current flow by using a threshold J = 0.2 µA m⁻² in Section 4.1; of high current flow by using a threshold J = 1.0 µA m⁻² in Section 4.2; and of extreme current flow by using a threshold J = 4.0 µA m⁻² in Section 4.3. We select these thresholds to enable comparison with the figures presented in Coxon et al. (2022). We only show plots for the Northern Hemisphere; the corresponding probabilities for the Southern Hemisphere are generally lower, consistent with Coxon et al. (2022), and are presented in Supporting Information S1. We also present bar charts showing the values of the maximum probabilities in each of the maps below (Figures 3-6 and 8-10) in Supporting Information S1 as an aid to the reader.
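A minimal sketch of the brute-force fallback fit and the survival function is given below, assuming one common parameterization of the q-exponential probability density, f(x) = ((2 − q)/κ)(1 + (q − 1)x/κ)^(1/(1−q)), which reduces to an exponential distribution as q → 1. The grid resolution and function names are illustrative, and this is not the authors' actual code.

```python
import numpy as np

def q_exp_loglik(x, q, kappa):
    """Log-likelihood of samples x >= 0 under the q-exponential pdf
    f(x) = ((2 - q) / kappa) * (1 + (q - 1) * x / kappa) ** (1 / (1 - q))."""
    if abs(q - 1.0) < 1e-9:  # exponential limit as q -> 1
        return np.sum(-np.log(kappa) - x / kappa)
    z = 1.0 + (q - 1.0) * x / kappa
    if np.any(z <= 0.0):     # data outside the support (possible for q < 1)
        return -np.inf
    return np.sum(np.log(2.0 - q) - np.log(kappa) + np.log(z) / (1.0 - q))

def fit_q_exp_brute(x, n_q=81, n_kappa=100):
    """Brute-force search within 0.98 <= q <= 1.02 and 0 < kappa <= 0.5
    (kappa grid starts slightly above zero to avoid division by zero)."""
    best = (-np.inf, None, None)
    for q in np.linspace(0.98, 1.02, n_q):
        for kappa in np.linspace(0.005, 0.5, n_kappa):
            ll = q_exp_loglik(x, q, kappa)
            if ll > best[0]:
                best = (ll, q, kappa)
    return best[1], best[2]

def survival(J, q, kappa):
    """P(X > J), the probability of current density above threshold J."""
    if abs(q - 1.0) < 1e-9:
        return np.exp(-J / kappa)
    return (1.0 + (q - 1.0) * J / kappa) ** ((2.0 - q) / (1.0 - q))
```

With fitted parameters for a given bin, survival(4.0, q, kappa) would then give an estimate of P(4.0) for that bin.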
Probability of Low Current Flowing

Figure 3 shows the probability P(0.2), which is the chance of relatively low current densities (and above); we use this to interpret the probability of appreciable Birkeland current density in a given bin. This minimum threshold also ensures that current densities in each bin are above the 3σ level of the AMPERE data set (Anderson et al., 2014). Figures 3a and 3b show that, for quiet times, the probability is highest (40%) in R1 current on the dayside, and lower for R2 current with no day/nightside dependence. The fact that dayside R1 current is most probable reflects the fact that dayside current reacts to dayside reconnection, and that R1 current reacts to dayside reconnection before the R2 currents (Anderson et al., 2014; Coxon et al., 2019). Currents on the nightside react to nightside reconnection and are therefore more likely to be seen in either of the substorm categories. Figures 3c and 3d show that the probability of any current flowing during storm times is very similar to quiet times, but current is more likely to flow further from the pole than it is during quiet times. We interpret this as an indication of the polar cap reaching larger sizes during storms.

Figures 3e and 3f show the probability of any current flowing during quiet-time substorms. The probability is highest in R1 at 50%, compared to 35% in R2. The spatial extent is similar to that for storm times, but has a larger latitudinal extent on the dayside. In this category, there is no clear difference between dayside and nightside R1 probability, whereas in R2 the nightside probability is slightly higher. R2 also has higher probabilities in the quiet-time substorms category than in storm times: this is notable because R2 current is thought to close through the partial ring current (Iijima & Potemra, 1978) and storms are associated with elevated ring current, whereas substorms have not explicitly been linked to enhanced ring current other than through storms. If the ring current is typically more enhanced during the main phase of a storm, this observation might be explained by the main phases being shorter than the recovery phases (Hutchinson et al., 2011; Walach & Grocott, 2019). Alternatively, this may simply be because authors have found that R2 current intensifies as part of the substorm current wedge (Anderson et al., 2014, 2018; Coxon et al., 2017; Forsyth et al., 2018; Sergeev et al., 2014a, 2014b). Figures 3g and 3h show the probability of any current flowing within storm-time substorms. The probability here reaches 60% for R1 current, and is 40% for R2 in the positive current but only 35% for R2 in the negative current, which appears to be a signature of a dawn-dusk asymmetry. This implies that R1 current is more likely than not to be at appreciable current densities when a substorm occurs within a storm, and that an appreciable R2 current is also likely. Both are more likely than in a substorm outside of a storm.

There is an asymmetry between positive and negative current in each category which we interpret as a dawn-dusk asymmetry; R1 current is more likely for negative current and R2 current is more likely for positive current (that is, the probabilities are higher on the dawn side). We discuss this in detail in Section 5.2.

Probability of High Current Flowing

Figure 4 shows the probability of high current flowing, P(1.0).
In Coxon et al. (2022) we found that there were two major zones of probability, referred to as Zone A and Zone B (see Figure 5); we adopt the same convention for Figures 4 and 6 in order to avoid making judgments about whether these currents are part of the R1 or R2 current systems. Zone A refers to the more poleward zone of probability, which is on the dusk side for positive current densities and on the dawn side for negative current densities; Zone B refers to the more equatorward region, which is on the opposite side to Zone A. The peak probability in Figures 4a and 4b is approximately 1%, which is far lower than the probability in Figure 3; this is as expected given that we have increased the threshold. In Figures 4a and 4b, during quiet times, the probability of high current flow is much higher in Zone A than in Zone B (the probability is not zero in Zone B, but it is very low). We note that there is again a dawn-dusk asymmetry. Figures 4c and 4d show that the probability of high current is 2-3 times as high in storms as it is for quiet times, and the area of Zone A which is likely to host high current is much wider. Additionally, there is a likelihood of high current in Zone B, which is approximately as likely as in quiet-time Zone A. The difference between panels a-b and c-d in Figure 4 is much larger than the equivalent difference in Figure 3. Figures 4e and 4f show the probability during quiet-time substorms, which has a slightly higher peak in Zone A than for storm times but is more spatially constrained. The probability of current in Zone B is lower than for storm times. Figures 4g and 4h show that the probability during storm-time substorms is at least twice as high as in the previous two categories, with the probability in Zone B much stronger and further equatorward than in the previous categories, and the spatial range of probability in Zone A much larger than in any previous category. Notably, the probability of current is higher on the dayside than on the nightside in all categories, including substorms.

Probability of Extreme Current Flowing

Figures 6a and 6b show the probability of extreme current flowing, P(4.0), during quiet times. The peak likelihood is extremely small (0.04%) in a very small strip in Zone A, with no visible signature in Zone B. The probability of extreme negative current is smaller (0.02%). The probability of extreme current for storm times (Figures 6c and 6d) is far higher than in quiet times (in contrast to P(0.2) and P(1.0)), with the peak 10 times higher and over a much larger spatial extent in Zone A, and a thinner region in Zone B clearly visible at 0.7%.

Notably, the probability of extreme current in quiet-time substorms (Figure 6e) is much more spatially constrained and approximately one third the probability of storm-time extreme currents in Zone A, and has no visible signature in Zone B. As in Figure 4, Figures 6e and 6f show that the probability of extreme current is larger on the dayside than on the nightside during quiet-time substorms. For positive current in storm-time substorms (Figure 6g) the peak probability is higher than for storm times in both Zones A and B, and the region of probability is more spatially constrained; however, the difference between storm times and storm-time substorms is not large. Notably, extreme currents are most likely on the dayside and least likely within 3 hr of midnight, which also allows us to infer that storms are driving these currents.
Probability Integrated Over Each Map

Figure 7 shows the probability P(J) integrated over each map in Figures 3, 4, and 6. We multiply the probability in each cell by the area of that cell, and we sum over all MLT between 0° and 50° colatitude. We divide by the highest integrated value (across the categories and positive/negative current per threshold) to present relative integrals between zero and one. We caution that these numbers are in arbitrary units and are only meaningful for comparisons between the maps in this study.

The relative integrated probabilities for P(0.2) (maps in Figure 3) are presented in Figure 7 (left). There is little difference in the relative integral between negative and positive current in any category. The relative integrals for quiet times and for storm-time substorms are as expected from visual inspection of the maps; quiet times have the lowest integral and storm-time substorms have the highest. The integral for storm times is lower than that for the quiet-time substorms; however, the difference is less than a visual inspection of Figures 3c-3f indicates. This indicates that the spatial smoothing caused by higher variation in current location during geomagnetic storms is reducing the probability in any given bin, but the relative integral is only slightly lower than that for substorms.

Figure 7 (center) shows the relative integrated probabilities for P(1.0) (maps in Figure 4). Those for quiet times are again lowest, and storm-time substorms are highest. However, in this case, the difference between the relative integrals for quiet times and for storm-time substorms is much larger. The relative integral for storm times is larger than that for quiet-time substorms, contrary to P(0.2). Further, the integrated probability is larger for positive current in all categories, which is the opposite sense to the asymmetry in P(0.2).

Figure 7 (right) shows the relative integrals for P(4.0) (maps in Figure 6). This demonstrates the extent to which geomagnetic storms dominate over substorms. The gap between positive and negative current is much wider than at previous thresholds, and the asymmetry is in the same sense as for P(1.0).

Contributions of R1 and R2 Currents

To evaluate the relative contribution of R1 and R2 currents in different categories, we employ an adaptive coordinate system (Chisham, 2017) based on R1-R2 boundaries given in Milan (2019), which are derived from a method outlined in Milan et al. (2015) and subsequently used to calculate AMPERE proxies for the OCB (Burrell et al., 2020). We refer to these coordinates as Birkeland Current Boundary (BCB) coordinates. In our coordinate system, we iterate through each hour of MLT and shift the current systems in that sector such that the R1-R2 boundary is fixed at a colatitude of 20°. This means that any currents located poleward of 20° colatitude are R1 currents and any currents located equatorward are R2 currents, by definition. This method was used briefly in Coxon et al. (2022) to try to determine whether the most probable extreme currents were in R1 or in R2.
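A minimal sketch of the coordinate shift just described is given below: each MLT sector's colatitude profile is translated so that the R1-R2 boundary lands at 20° colatitude. The grid layout, bin width, and boundary values are illustrative assumptions, not the actual AMPERE/BCB data format.

```python
import numpy as np

def to_bcb_coordinates(j_map, boundary_colat, target_colat=20.0, dlat=1.0):
    """Shift each MLT sector of a current-density map so the R1-R2 boundary
    sits at a fixed colatitude (Birkeland Current Boundary frame).

    j_map          : array (n_mlt, n_colat); rows are MLT sectors, columns are
                     colatitude bins of width dlat degrees (pole at column 0).
    boundary_colat : array (n_mlt,) of R1-R2 boundary colatitudes per sector
                     (e.g., from the Milan (2019) proxies).
    After shifting, colatitudes < target_colat are R1 and colatitudes
    > target_colat are R2, by definition.
    """
    shifted = np.full_like(j_map, np.nan)
    for i, b in enumerate(boundary_colat):
        shift = int(round((target_colat - b) / dlat))  # + moves profile equatorward
        shifted[i] = np.roll(j_map[i], shift)
        # blank the bins that wrapped around the array edge
        if shift > 0:
            shifted[i, :shift] = np.nan
        elif shift < 0:
            shifted[i, shift:] = np.nan
    return shifted

# Example: 24 MLT sectors, 50 one-degree colatitude bins.
rng = np.random.default_rng(1)
j_map = rng.normal(0.0, 0.2, size=(24, 50))
boundaries = 17.0 + 4.0 * rng.random(24)  # hypothetical R1-R2 boundaries
print(to_bcb_coordinates(j_map, boundaries).shape)
```

The same per-sector shift can be applied to the probability maps themselves, which is how the R1/R2 attribution in Figures 8-10 should be read.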
Figure 8 shows the probability of currents above four thresholds (0.2, 1.0, 2.0, and 4.0 µA m⁻²) for storm times. Panels a-b show the lowest threshold, and confirm that the regions in Figure 3 map to R1 and R2 current as inferred in Section 4.1. Panels c-d show that Zone A primarily maps to R1 current for P(1.0) and Zone B primarily maps to R2 current. In panel d, the zones are well separated into R1 and R2, but in panel c both zones partly straddle the 20° colatitude line. This indicates either that the zones are comprised of both R1 and R2 for positive current, or that the coordinate system is not successfully disentangling R1 and R2 current. For positive current (panels e and g), as the current threshold is raised, Zone A moves equatorward (i.e., the amount of Zone A which is comprised of R1 current decreases) and Zone B moves poleward. For negative current (panels f and h), Zone A becomes less well defined, shifts toward dawn, and its brighter part moves closer to the pole.

Figure 9 shows the probability of currents for quiet-time substorms. P(0.2) (panels a-b) is very similar to the previous figure. For P(1.0) (panels c-d), Zones A and B again primarily map to R1 and R2. Zone A has a smaller latitudinal extent than in the previous figure, but stretches further toward the nightside here than it does there; this is indicative of substorms driving currents on the nightside due to nightside reconnection. In comparison to Figure 4, the probability of current on the nightside is higher than it was in AACGM coordinates, which may indicate that plotting the nightside probabilities in AACGM coordinates smooths the probabilities out. However, the probability of strong current is still higher on the dayside than it is on the nightside in BCB coordinates. In panels c-d, the probability in Zone B is notably lower than in storm times (as seen in Figure 4), which is consistent with storms driving more enhanced R2 current than substorms. At higher thresholds (panels e-h), the probabilities are much lower than for storm times and Zone B is invisible on the given color scale.

Figure 10 shows the probability of currents for storm-time substorms. The morphology of Zones A and B is very similar between Figures 8 and 10, but the probabilities change; in Figures 10a-10f the probabilities are higher than in Figure 8, but in panels g-h the probabilities are smaller. This indicates that the most extreme currents are driven by dayside reconnection during geomagnetic storms, and are not driven by nightside reconnection (i.e., substorms) during these periods; this in turn has important ramifications for space weather forecasting and operational awareness.

Discussion

To summarize the results presented in Section 4, Figure 1 shows that mean R1 currents are typically larger than mean R2 currents, consistent with our previous work (Coxon et al., 2022) and with previous studies of the large-scale current systems (Anderson et al., 2008; Weimer, 2001). The difference between the means in the four categories is consistent with previous work on substorms, showing that the ratio of R1 to R2 current is larger during substorms than it is outside substorms (Coxon et al., 2014b) and that both current systems intensify during substorms (Coxon et al., 2017). Examination of the current density probabilities between the categories is consistent with the modern view of a storm as a distinct phenomenon (Gonzalez et al., 1994), as we will now discuss.
The Difference Between Storms and Substorms

Our results build a picture in which substorms are more likely to drive current in general, but the most extreme currents are much more likely to occur during storms. This is consistent with a view in which storms are characterized by systematically higher variability.

Figure 3 shows that the probability of current density exceeding a low threshold is higher during substorms than it is outside substorms, which indicates that current is more likely to flow during substorm times. This is consistent with Figure 1 showing that mean currents are higher during substorms. The probability of high current is roughly the same in Figures 4c and 4d as it is in Figures 4e and 4f, indicating that both storms and substorms lead to a higher chance of high current, but is much higher in Figures 4g and 4h, indicating that high current is most likely in storm-time substorms. Figure 6 shows that the probability of extreme current is much higher during storm times than it is during substorm times, which is consistent with Figure 2. We find that extreme current densities are most likely on the dayside across all categories, with probabilities highest between 14-20 MLT and 04-10 MLT in storm times and storm-time substorms. This is consistent with the fastest convective flows during geomagnetic storms, which are also largely seen on the dayside (Walach & Grocott, 2019).

These inferences are broadly reinforced by the probabilities plotted in BCB coordinates, which also demonstrate that current densities exceeding a low threshold are more likely in substorms but that extreme currents are more likely during geomagnetic storms. Using BCB coordinates we can attempt to disentangle which current system is responsible for the most extreme currents. We find that the highest probability of extreme current is located within the R2 current system in each of the combinations of substorms and storms examined. Interestingly, in this coordinate system, the highest probability is found during storm times; this may be a sign that the BCB coordinate system is better ordered by dayside reconnection processes than by substorm processes, and is potentially a note of caution for future work that uses this coordinate system. We also note that the boundary is determined by the current systems seen at dawn and dusk, rather than at noon and midnight where the current systems are less well defined (Iijima & Potemra, 1978).

Extreme currents are located in the R2 current system. As previously noted in Coxon et al. (2022), this has interesting ramifications. The largest R2 currents flow in the opposite direction to the average R2 current system, which is consistent with previous reports of embedded Birkeland currents (Liu et al., 2021). Physically, we interpret this in the context of enhanced ring current during geomagnetic storms (Chapman & Ferraro, 1933) and the closure of the R2 current system through the ring current (Iijima & Potemra, 1976a, 1978). There is a small region of current ∼5° poleward of the extreme R2 current shown in Figures 8 and 10, and it is not obvious which signature it corresponds to in AACGM coordinates. However, this may imply that some extreme current is flowing in R1 during storm times, but only on the dawn side flowing into the ionosphere (we will return to this in the next section).

In terms of storms and substorms, these results are consistent with work on the rate of change of the surface magnetic field (R).
Smith et al. (2019) showed that, in the United Kingdom, more than 90% of the most extreme values of R were observed within 3 days of a sudden commencement (SC). They further subdivided SCs into sudden impulses (SIs) and SSCs and showed that the extreme values of R were much more common in SSCs than in SIs, indicating that the extreme behavior was primarily being observed within geomagnetic storms. Smith et al. (2021) showed that this was also generally true outside of the United Kingdom. Conversely, Freeman et al. (2019) defined extreme values of R as being in the 99.97th percentile, and showed that more than half of these values occurred within substorm expansion and recovery phases, noting that those times only comprised 13.4% of the data set, and concluding that substorms were more likely than generally enhanced convection to display extreme behavior. They also found that at two of their magnetometers (Hartland and Eskdalemuir) the probability was higher on the dayside, but this was not true for Lerwick. They did not separate substorms according to whether or not they were in geomagnetic storms, however, and it is unclear how their definition of "extreme" corresponds with ours.

Dawn-Dusk Asymmetry in Probability

In Figure 3, R1 current is more likely in the negative current across all categories, and the difference becomes more pronounced from quiet times to storm-time substorms; the R2 current probability is reversed, and R2 current is more likely in the positive current across all categories. This means that the dawn flank shows higher probabilities than the dusk flank for both current regions. Examination of the corresponding Figure 8 in Coxon et al. (2022) shows the same effect, which was not highlighted in that manuscript. When we increase the thresholds, the extent of the dawn-dusk asymmetry changes: in Figure 4, Zones A and B have higher probability when they are located on the dawn side than when they are on the dusk side, but the effect is less obvious. In Figure 6 the opposite is true in all the plots, such that the peak probability in each dusk zone is more pronounced than its dawn counterpart (for a key to Zones A and B, see Figure 5).

Examining Figures 8-10, we see how this asymmetry manifests in BCB coordinates. Figures 8a and 8b show P(0.2) for storm times, and there is no clear dawn-dusk asymmetry in the probabilities. For quiet-time substorms (Figures 9a and 9b) and storm-time substorms (Figures 10a and 10b), both Zones A and B show a higher probability on the dawn side than the dusk side for P(0.2), and this is true for all three categories in P(1.0) in BCB coordinates (panels c-d in each figure). Then, for P(2.0) and P(4.0), the effect switches, as it does in AACGM coordinates.
To interpret this asymmetry we first turn to the large-scale morphology shown in Figure 1. This average picture shows that on the dawn side of Earth, R1 current flows downward (into the ionosphere) and R2 current flows upward (out of the ionosphere). The reverse is true on the dusk side. It is thought that the majority of Birkeland current is carried by electrons (Hoffman et al., 1985), and therefore downward currents are associated with electrons traveling up from the ionosphere into the magnetosphere, while upward currents are associated with electrons traveling from the magnetosphere into the ionosphere. Cowley (2000) notes that upward currents "are carried by hot magnetospheric electrons moving downwards into the mirror field geometry near the Earth" and that driving sufficient upward current to fulfill the current circuit at Earth requires potential drops to accelerate the electrons down the field lines (Knight, 1973), leading to highly non-linear effects. This may mean that the dawn-dusk asymmetry is caused by the relative abundance of current carriers; it will be easier to carry strong R1 current on the dawn side, owing to the fact that the R1 current is being carried by upward-flowing electrons from the ionosphere on that side. This would explain why the only evidence of extreme R1 current appears to be for downward R1 current flow.

However, on the face of it, this argument seems to be at odds with the fact that R2 currents on the dusk side are also carried by upward-flowing electrons, and these currents are also weaker than their dawn counterparts. The relationship between aurora and Birkeland current has been investigated parametrically (Carter et al., 2016), and there is a lack of correspondence on the dusk side between field-aligned currents and the auroral oval, which may lend credence to the idea that this is being driven by some asymmetry in the relationship between current and charge carriers. Conversely, McWilliams et al. (2001) used SuperDARN vorticity to calculate the quantity J∥/ΣP and found that, using this method, upward field-aligned current was colocated with aurora from the Polar Visible Imaging System (VIS) in the post-noon sector. Chisham et al. (2007) presented Figure 10c from McWilliams et al. (2001) alongside data from Polar VIS and the Polar Ultraviolet Imager (UVI) in their Figure 11, demonstrating a correspondence between upward current and aurora on both sides of the polar cap. As far as we are aware, these studies are the only published comparisons between the system-scale positions of field-aligned currents and aurora.
If we turn to a view of the system as a current circuit (e.g., Figure 1 in Cowley, 2000), we can see that on the dawn side the R1 current is flowing into the ionosphere and closing across the polar cap through the R1 current on the dusk side, as well as closing equatorward through the R2 current on the dawn side. This may mean that the strongest current would be expected to be the dawn-side R1 current shortly after both current systems first intensify (Anderson et al., 2014; Coxon et al., 2019). In the large-scale current circuit paradigm, current flowing out through R2 on the dawn side of the ionosphere is expected to close through the partial ring current on the nightside of Earth and back into the ionosphere through R2 on the dusk side (e.g., Ganushkina et al., 2018, and references therein). Any systematic dawn-dusk asymmetry in R2 over a long period of time would require current to flow out through R2 on the dawn side and then not flow back through R2 on the dusk side. This would require current to flow from R2 on the dawn side, through the ring current, and then close through some other current system which was not R2; this may indicate that the current flows are more complex than the large-scale current circuit paradigm suggests.

Conclusions

The effects of geomagnetic storms and substorms can be differentiated by combining identification methods to identify times at which either or both phenomena are occurring (Forsyth et al., 2015; Walach & Grocott, 2019). We have combined these methods and compared the resulting categories in order to shed light on the ways in which these phenomena affect the probabilities of Birkeland current densities over an 8-year period.

We have shown that geomagnetic storms are characterized by inherently more extreme behavior, and this means that storms are more likely to drive extreme currents such as those which are most likely to negatively affect operations and infrastructure (Eastwood et al., 2018). This is consistent with previous studies which have shown that the most extreme rates of change of the surface magnetic field are associated with SSCs (Smith et al., 2021). However, substorms are more likely to drive appreciable current than storms, and consequently the mean currents during substorms are higher than during storms.

In terms of location, we find that extreme currents are more likely on the dayside of Earth than the nightside and least likely within 3 hr of midnight. We have employed the boundaries between R1 and R2 currents in order to investigate how much each current system contributes to the probability of currents at certain thresholds. We show that the most extreme currents are most likely to flow in the R2 currents on the dusk side in every category.

Figure 1. Plots showing the mean current j for (a, b) quiet times, (c, d) storm times, (e, f) quiet-time substorms, and (g, h) storm-time substorms. The top row is from the Northern Hemisphere and the bottom, the Southern. Numbers around the edges of each plot denote hours of MLT, and each plot shows data 0°-40° from the pole.

Figure 2. Plots showing the standard deviation in the same format as Figure 1.

Figure 3. Plots showing P(0.2) for (a, b) quiet times, (c, d) storm times, (e, f) quiet-time substorms, and (g, h) storm-time substorms. The top row is for positive current and the bottom row is for negative current.

Figure 4. Plots showing P(1.0) in the same format as Figure 3.

Figure 5. Key to interpreting Zones A and B, used in discussion of Figures 4-10.
Figure 6. Plots showing P(4.0) in the same format as Figure 3.

Figure 7. Plots showing the relative integrated probability for each of the maps presented earlier in Section 4. The relative integrals are computed for positive (red) and negative current (blue) for (left) P(0.2), (center) P(1.0), and (right) P(4.0). For more details, see the text.

Figure 8. Plots for storm times showing (a, b) P(0.2), (c, d) P(1.0), (e, f) P(2.0), and (g, h) P(4.0). The top row is for positive current and the bottom row is for negative current. The parameters are plotted in R1-R2 coordinates (Milan, 2019). Note that the color scales are different for each column.

Figure 9. Plots for quiet-time substorms in the same format as Figure 8.

Figure 10. Plots for storm-time substorms in the same format as Figure 8.
A review of deep eutectic solvents (DESs): Preparation, Classification, Physicochemical Properties, Advantages and Disadvantages

Abstract: Since deep eutectic solvents (DESs) are readily available, inexpensive, highly biodegradable, and easy to synthesize, they are gaining popularity as a green alternative to hazardous organic solvents and traditional ionic liquids (ILs). A DES is a mixture of a hydrogen bond donor (HBD) and a hydrogen bond acceptor (HBA) that is viscous and poorly water-soluble. DES fabrication pursues two main aims: the first is to bring the operating temperature of the mixture below that of its components; the second is to produce a molten salt with a melting point lower than those of its components. DESs are therefore being utilized more and more in a range of analytical chemistry applications, and different extraction techniques have been developed around this class of solvents. In this review, the production methods and classification of DESs are discussed. The most important physical properties of DESs are surveyed, including melting point, viscosity, density, surface tension, phase behavior, analyte solubility, instrumental compatibility, and toxicity. The main advantages and disadvantages of DESs are also summarized.

I. INTRODUCTION

Deep eutectic solvents (DESs) are molten-salt-like solvents formed from an HBD and an HBA combined in particular proportions at suitable temperatures and bound by hydrogen bonds [1-3]. In environmentally friendly analytical chemistry, this new class of green solvents has received a lot of interest lately. This is due to desirable properties such as low industrial cost, biocompatibility, biodegradability, and ease of use compared to ionic liquids and hazardous organic solvents. Because of these benefits, DESs are now widely used instead of traditional organic solvents in the extraction and preconcentration of a wide range of both inorganic and organic compounds [4,5]. DESs are becoming one of the most desirable alternatives to toxic chemical solvents [6,7]. They are eutectic combinations of several substances connected by hydrogen bonds and van der Waals forces [8,9], which yields eutectic temperatures lower than the melting points of all the compounds that make up the DES [10]. Most of these mixtures are nontoxic, eco-friendly extraction solvents [11,12]. In contrast to traditional methods, which use harmful organic solvents, an increasing number of advanced, eco-friendly analytical techniques use this new solvent family. Because of their wide range of applications and structural flexibility, DESs have recently seen a rise in interest in micro-extraction techniques [13]. Depending on the application, the solvent's hydrophilic/hydrophobic properties can be tuned [6]. DESs are also referred to as inexpensive ionic liquid substitutes [14,15]. An increasing number of novel analytical techniques that are less harmful to the environment than conventional procedures relying on hazardous organic solvents are adopting this family of newly developed solvents [11].
II. AN OVERVIEW OF DESs IN BRIEF

The term "eutectic" was first used in 1884 to characterize metal alloys with melting temperatures lower than those of their constituent parts [16]. This idea led to the definition of a eutectic mixture as a combination of two or three compounds that, in the corresponding phase diagram, shows a minimum melting point at a certain molar ratio [17]. In 2003, Abbott and colleagues reported the introduction of novel eutectic combinations that have interesting solvent characteristics and are liquid at room temperature [18]. Named "deep eutectic solvents (DESs)," these solvents were prepared from eutectic combinations of amides and quaternary ammonium salts [18]. They are frequently a eutectic combination of a hydrogen bond donor (HBD) and an HBA molecule. The melting point of a DES is depressed relative to the melting points of its individual components as a result of the charge delocalization driven by hydrogen bonding [19]. In addition, the combinations maintain their liquid state because hydrogen bonds and van der Waals forces prevent the original components from crystallizing [20]. One of the most significant aspects of these novel solvents is the range of DESs that can be generated by altering their components [20]. Because of their low lattice energy and large, nonsymmetric ions, DESs show low melting temperatures. To create a DES, a quaternary ammonium salt is typically mixed with an HBD or a metal salt. The reduction in the melting point of the combination relative to the melting points of its constituent parts can be attributed to charge delocalization resulting from hydrogen bonding, such as that between a hydrogen-donor moiety and a halide ion [21]. In 2001, Abbott et al. investigated the freezing points of various quaternary ammonium compounds by heating them with ZnCl₂. It was discovered that using choline chloride as the ammonium salt produced the lowest melting point, 23-25 °C [21]. Building on this original study, other liquids consisting of eutectic combinations of salts and hydrogen bond donors have since been produced. The phrase "deep eutectic solvent" was coined to distinguish these liquids from ionic liquids, which contain discrete anions.

III. METHODS OF PREPARATION

DESs are made with 100% atom economy, as noted above: preparation only requires combining the HBA and HBD, and purification and waste disposal are either unnecessary or omitted. Four procedures are in common use:

(I) Heating method: the most common route, in which the HBA and HBD are heated and constantly stirred in an inert environment until a uniform liquid forms [18].

(II) Evaporating method: the DES components are first dissolved in water; the water is then evaporated at 323 K under vacuum, and the resulting mixture is kept in a desiccator until a stable weight is reached.

(III) Grinding method: solid HBA and HBD are placed in a mortar housed in a glove box under an inert nitrogen atmosphere and ground continuously until a clear, uniform liquid is produced [22].

(IV) Freeze-drying method: the HBD and HBA are each dissolved in water at five weight percent; the two aqueous solutions are mixed, frozen, and then freeze-dried to yield a consistent, clear liquid [23].
IV. CLASSIFICATIONS OF DEEP EUTECTIC SOLVENTS (DESs)

The general formula Cat⁺X⁻·zY can be used to characterize deep eutectic solvents, where Cat⁺ is, in principle, any ammonium, phosphonium, or sulfonium cation and X⁻ is a Lewis base, usually a halide anion. Complex anionic species are generated between X⁻ and a Lewis or Brønsted acid Y (z denotes the number of Y molecules that interact with the anion). Imidazolium and quaternary ammonium cations have been the subject of most research, with an emphasis on simpler systems that include choline chloride [ChCl, HOC₂H₄N⁺(CH₃)₃Cl⁻] [24]. The most common combinations of halide salts and hydrogen-bond donors used to create DESs are shown in Fig. 1 [25]. As shown in Table 1, DESs are separated according to the characteristics of their constituents [20]. Four primary DES types have been reported; the potential existence of a fifth type has been discussed, but not enough research has been done on it.

Type I: Made from anhydrous metal chlorides and quaternary salts. Since few non-hydrated metal halides have suitably low melting points, relatively few type I combinations are available.

Type II: Made from hydrated metal chlorides and quaternary salts.

Type III: Created from quaternary salts (HBA) and HBD compounds. Due to the wide variety of HBD and HBA components available, a huge number of distinct DESs of this type with varying chemical and physical characteristics can be created [26]. Therefore, by changing one or both of its components, the properties of this family of DESs can be tuned [26].

Type IV: Made of metal chlorides and HBD compounds.

DESs of types III and IV may be either water-immiscible or water-miscible depending on the makeup of their constituents, whereas DESs of types I and II are all water-miscible [27]. In 2019, Coutinho et al. proposed a type V DES composed of non-ionic substances such as menthol and thymol [28]. Table 1 contains examples of several DES kinds as well as a general formula.

V. PHYSICOCHEMICAL PROPERTIES

A. Melting Point

The melting point of a DES is the lowest temperature at which it can be used as a liquid, and it is far lower than the melting points of its constituent components. The main factors affecting a DES's melting point are the hydrogen bonds and van der Waals forces between the HBD and HBA. The HBA and HBD interact throughout the eutectic phase, with changes in entropy and lattice energy producing this large melting-point drop [29]. According to a study cited by van Osch et al., the alkyl chain structure of the HBD and HBA plays a major role in determining the melting point of DESs: most ionic and non-ionic hydrophobic DESs melt at increasing temperatures as the alkyl chain length of the fatty acid HBD increases. Furthermore, the melting point of ionic, hydrophobic DESs is significantly influenced by both the anions and the cations in the ionic components (HBA) [30].
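As a concrete illustration of this melting-point depression in a Type III DES of the form Cat⁺X⁻·zY, consider the widely cited 1:2 choline chloride-urea mixture ("reline"); the melting points quoted below are the commonly reported literature values, given here for orientation rather than taken from this review's tables:

$$\underbrace{[\mathrm{Ch}]^{+}\mathrm{Cl}^{-}}_{T_m \approx 302\ ^\circ\mathrm{C}} \;+\; 2\,\underbrace{\mathrm{CO(NH_2)_2}}_{T_m \approx 133\ ^\circ\mathrm{C}} \;\longrightarrow\; [\mathrm{Ch}]^{+}\,[\mathrm{Cl}\cdot 2\,\mathrm{urea}]^{-},\qquad T_{\mathrm{eutectic}} \approx 12\ ^\circ\mathrm{C},$$

a depression of well over 100 °C relative to either pure component, driven by hydrogen bonding between the chloride anion and the urea HBD.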
B. Viscosity

In the extraction process, solvent viscosity is thought to be a crucial factor, especially for large-scale applications [31]. However, when DESs are compared to organic solvents such as heptane as extraction media, their viscosity still poses considerable problems. For instance, the viscosity of a DES will decrease when moisture is present [32]. Typically, the HBD components of a hydrophilic DES system are responsible for the largest variations in its viscosity. Hydrophilic DESs containing phenol, glycols, or ethylene glycol have reduced viscosity. On the other hand, hydrophilic DESs based on choline chloride and composed of urea, polycarboxylic acids, or sugars have a medium to high viscosity [29].

C. Density

Density has a big impact on how extraction operations are designed and which solvents are selected. The degree of interaction between its constituent parts and the molecular packing have a significant impact on the density of a DES. This is why the majority of hydrophobic DESs are preferred in sample pre-treatment procedures, since their densities are equal to or less than that of water [29]. Ibrahim et al.'s research shows that temperature affects the density of DESs [33]. Martins et al. found that the density of DESs containing menthol and a monocarboxylic acid drops as the alkyl chain length of the monocarboxylic acid grows. Compared to menthol-based DESs, thymol-based DESs have a greater density for a given monocarboxylic acid alkyl chain length [34]. Furthermore, Guo et al.'s study found that the density of the prepared DES was influenced by the HBA-to-HBD mole ratio [35].

D. Surface Tension

Surface tension, the energy needed to increase the surface area per unit area, is a basic fluid property. This energy arises at the interface from intermolecular forces [36]. Surface tension affects the behavior of DESs in interfacial mass-transfer processes [37]. Both temperature and the intensity of the intermolecular interactions within a DES have an impact on its surface tension [38]. Most DESs have a surface tension that is higher than that of traditional solvents [39]. The strength of the interactions between HBD and HBA plays a significant role in determining the surface tension of DESs [40]. Table 2 shows a few examples of physical characteristics for several DES types at the eutectic composition at 298 K, in addition to a comparison with ionic liquids containing discrete anions and certain molecular solvents. Compared to other ionic liquids and molecular solvents, DESs have relatively high viscosities and low conductivities. The large ion sizes and relatively large free volume in these ionic systems have been proposed as the cause of this discrepancy. The viscosities of ionic liquids are substantially higher than those of the most common molecular solvents; the activation energy of viscous flow is large, and the viscosity exhibits an Arrhenius dependence on temperature.

E. Phase Changes in Behavior

The difference, ΔTf, between the freezing point at the eutectic composition of a binary mixture of A and B and that of a theoretical ideal mixture reflects the degree of interaction between A and B: the greater the interaction, the larger ΔTf will be.
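For reference, the ideal eutectic against which this depression is measured can be located from the Schröder-van Laar relation of ideal-solution thermodynamics (a standard result, quoted here in our own notation rather than the review's):

$$\ln x_A = \frac{\Delta H_{m,A}}{R}\left(\frac{1}{T_{m,A}} - \frac{1}{T}\right),$$

where x_A is the mole fraction of component A in the liquid, ΔH_{m,A} and T_{m,A} are its melting enthalpy and melting temperature, and R is the gas constant. The ideal liquidus branches of A and B cross at the ideal eutectic; ΔTf measures how far the real eutectic temperature falls below this crossing point owing to HBA-HBD interactions.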
Fig. 2 depicts this schematically. Let us begin with type I eutectics: different metal halides will interact with the halide anion from the quaternary ammonium salt to form similar classes of halometallate species with related enthalpies of formation. ΔTf values, therefore, ought to be in the range of 200-300 °C. It has been found that, for a eutectic to occur at ambient temperature, a metal halide's melting point generally needs to be 300 °C or below. Thus, it is simple to understand why metal halides like AlCl₃ (mp = 193 °C) [42], FeCl₃ (308) [24], SnCl₂ (247) [42], ZnCl₂ (290) [43], InCl₃ (586) [44], CuCl (423) [45], and GaCl₃ (78) [46] produce eutectics at ambient temperature. Even though they have not yet been studied, metal salts like SbCl₃ (mp = 73 °C), BeCl₂ (415), BiCl₃ (315), PbBr₂ (371), HgCl₂ (277), and TeCl₂ (208) may also be expected to produce ambient-temperature eutectics. Conversely, less symmetrical cations, which have lower melting points, give lower-melting eutectics in quaternary ammonium compounds. Type II eutectics were created to add more metals to the DES formulations. It was found that the corresponding anhydrous salts have melting points higher than those of the metal halide hydrates; evidently, the waters of hydration lower the melting point of metal salts by lowering the lattice energy. A lower melting point of the pure metal salt will result in a smaller depression of the freezing point (ΔTf), as Fig. 2 illustrates [47].

F. Analyte Solubility

Solubility is a crucial physicochemical property for commercial activity, especially in the pharmaceutical and drug-analysis fields [51]. DES solvents have better solubilizing properties than traditional solvents [52]. According to published reports, these solvents have potent solubilization properties for both polar and weakly polar materials, including pharmaceuticals, metal oxides, carbon dioxide, and elemental species such as cadmium, lead, and mercury [53].

G. Instrumental Compatibility

DESs are advantageous in extraction operations since, as frequently reported for these solvents and their constituents, they do not interfere with detection techniques. Numerous experimental detection techniques, including HPLC, have demonstrated high compatibility with many deep eutectic solvents [39]. Consequently, new combinations utilizing enhanced separation methods (such as HPLC) and employing a particular amount of DES in the mobile phase are being created [54,55].

H. Toxicity

Toxicity profiling is especially essential for finding new pharmaceutical compounds. In this regard, the majority of research on the toxicity of DESs has focused on in vitro studies in rats, bacterial cells, and cultured human cell lines; however, the literature also covers fungal cells, plants, fish, and invertebrates [56]. The available results clearly indicate that the blanket description of DESs as green is not entirely accurate, and it is best to avoid drawing such broad inferences [57]. The chemical composition, concentration, and viscosity of a DES determine its cytotoxicity; for example, the heavy-metal content of type I, II, and IV DESs makes them more dangerous than type III DESs [58]. The toxicity of DESs is still a matter of debate; furthermore, due to the scarcity of research on the topic, there have been disagreements about the biological properties of such solvents [57]. Theoretical and experimental understanding of DES toxicity will probably grow in the future.
VI. ADVANTAGES AND DISADVANTAGES OF DEEP EUTECTIC SOLVENTS (DESs)

Eutectic compounds, or DESs, are composed of two or more constituents joined by hydrogen bonds or other non-covalent bonds [59]. DES solvents are generally not considered unique compounds but rather mixtures [60]. DESs can be created at minimal cost and with excellent biodegradability [61], and they can be easily designed and adjusted. Nevertheless, the advantages of DESs over volatile organic compounds (VOCs) do not imply that they are faultless. In actuality, these substances possess numerous flaws, including high hygroscopicity, flammability, low-to-average stability, and significant variability. These problems cannot be disregarded, but good design may allow us to overcome or greatly mitigate them.

VII. DISADVANTAGES

• Instability: volatility contributes to the bulk loss of DESs observed in TGA and to their thermal decomposition.

• Hygroscopicity and air exposure: on contact with air, DESs absorb airborne substances such as O₂, N₂, water, CO₂, and SO₂.

• Cost and synthesis: for a genuine industrial process, the cost of DESs is crucial; in general, the starting components determine the cost of DESs [64-66].

• Usage of energy.

• Impurities: there is a chance that DESs will eventually contain impurities. The sources of impurities in DESs can be the raw materials used in their synthesis. Owing to the solubility in the DES of water, oxygen, and other gases from the air, as well as the breakdown of DESs by those species, DESs may also become contaminated by the atmosphere [67,68].

CONFLICT OF INTEREST

The authors declare that they have no conflict of interest.

Fig. 1. The most common combinations of hydrogen-bond donors and halide salts used in the DES synthesis procedure.

Fig. 2. Schematic representation of a eutectic point on a two-component phase diagram.

Table 3. Freezing point temperatures of a variety of DESs.
Non-Perturbative Functional Renormalization Group for Random Field Models and Related Disordered Systems. I: Effective Average Action Formalism

We have developed a nonperturbative functional renormalization group approach for random field models and related disordered systems for which, due to the existence of many metastable states, conventional perturbation theory often fails. The approach combines an exact renormalization group equation for the effective average action with a nonperturbative approximation scheme based on a description of the probability distribution of the renormalized disorder through its cumulants. For the random field $O(N)$ model, the minimal truncation within this scheme is shown to reproduce the known perturbative results in the appropriate limits, near the upper and lower critical dimensions and at large number $N$ of components, while providing a unified nonperturbative description of the full $(N,d)$ plane, where $d$ is the spatial dimension.

I. INTRODUCTION

The effect of quenched disorder on the long-distance physics of many-body systems largely remains an unsettled question despite decades of intensive research. Ongoing controversies persist for instance on the equilibrium and out-of-equilibrium behavior of spin glasses and systems coupled to a random field. 1,2 Even though progress has been made, it has so far proven difficult to construct a proper renormalization group (RG) approach providing a description of ordering transitions and criticality in these systems. A technical reason for this unsatisfactory situation is that quenched disorder makes the system intrinsically inhomogeneous and that one should in principle follow the renormalization of the whole probability distribution of the disorder. A physical reason is that the presence of disorder and of the resulting spatial inhomogeneity lead, for at least some range of the control parameters, to multiple "metastable states". (At this point we use the term "metastable state" in a loose sense to describe configurations that minimize some energy or free energy, action or effective action in field-theoretical terminology, but differ from the true ground state.) How such metastable states evolve upon coarse-graining under RG then represents the central issue: at large lengthscale, their influence could vanish, leaving only benign signatures in the thermodynamics, or else it could modify the critical behavior of the system, the nature of its phases, and, often in an even more spectacular way, the relaxation and out-of-equilibrium dynamical properties.

A well-known example of the kind of puzzles associated with quenched disorder and metastable states is the failure of the so-called "dimensional reduction" property in the random field Ising model (RFIM). 3,4,5,6 Standard perturbation theory predicts to all orders that the critical behavior of the RFIM in dimension d is the same as that of the pure Ising model, i.e., in the absence of a random field, in two fewer dimensions, d − 2. The property has been shown in a compact and elegant manner by Parisi and Sourlas 7 by means of a supersymmetric formalism. However, dimensional reduction predicts a lower critical dimension for ferromagnetism in the RFIM of d_lc = 3, in contradiction with rigorous results. 8,9 The dimensional reduction property must therefore break down in low enough dimension.
The supersymmetric approach gives a hint at the origin of the breakdown, which appears to be related, yet in a somewhat obscure way, to the presence of multiple metastable states 10 (in this case, local minima of the Hamiltonian).

Over the years, and on top of numerous computer simulations and scarce exact analytical results, theoretical approaches have been devised to cope with disordered systems characterized by multiple metastable states, such as spin glasses and random field models. 1 To list the main ones, we mention: (i) phenomenological approaches such as the heuristic domain-wall arguments 11,12 and the "droplet" description, 13,14,15 in which one directly focuses on rare excitations and the associated low-energy metastable states; (ii) mean-field theories, combined with the replica formalism in order to handle the average over disorder; for models with spin-glass ordering, the potentially dramatic effect of the metastable states is captured through a spontaneous breaking of the replica symmetry; 1,2,16,17 (iii) specific RG techniques for low-dimensional (d = 1, 2) systems, such as the Coulomb gas RG approach for two-dimensional disordered XY models 18,19 or real-space RG for strongly disordered one-dimensional systems; 20,21,22 (iv) the perturbative functional RG for energy-dominated disordered models considered in the vicinity of a critical dimension at which the fundamental fields are dimensionless; 23,24,25,26,27,28 one must then follow the flow of a whole function, an appropriate renormalized cumulant of the disorder. As shown first by Fisher 28 for an elastic manifold pinned by a random potential, the long-distance physics is controlled by a zero-temperature fixed point at which the renormalized cumulant is a nonanalytic function of the fields, with the nonanalyticity encoding the effect of the many metastable states at zero temperature.

All these approaches, however, are either questionable or not easily generalizable: on the one hand, the phenomenological approaches lack rigorous foundations and the relevance of mean-field descriptions to finite-dimensional systems is, to say the least, far from guaranteed; on the other hand, the perturbative functional RG becomes extremely complex, and soon intractable in practice for random field systems when going beyond one-loop calculations; 29,30,31 moreover, it does not allow one to study the RFIM (as for the specific RG techniques, they are not extendable by construction). The purpose of the present work, described here and in a companion paper, 32 is therefore to propose a general theoretical framework that leads to a consistent description of the equilibrium behavior of random field models and related disordered systems. To achieve this, we rely on a version of Wilson's continuous RG via momentum-shell integration. 33 Under various terminologies, "Exact RG", "Functional RG", and "Nonperturbative RG", it has been developed in the past 15 years to become a powerful method for investigating both universal and nonuniversal properties in Statistical Physics and Quantum Field Theory. 34,35,36,37,38 The approach is "exact" in the sense that the RG flow associated with the progressive account of the field fluctuations over larger and larger lengthscales is described through an exact functional differential equation. It is "functional" because, through the exact equation, one follows the flow of an infinite hierarchy of functions of the fields in place of simple coupling constants.
It is "nonperturbative" (beyond the mere tautology that an exact description automatically includes all perturbative as well as nonperturbative effects) because it lends itself to efficient approximation schemes that are able to capture genuine nonperturbative phenomena: 36 to name a few, in the case of the standard O(N ) scalar model, (numerically) tractable approximations describe the Kosterlitz-Thouless transition of the XY model in d = 2, known to be associated with the binding/unbinding of topological defects (vortices), as well as the convexity property of the thermodynamic potential in case of spontaneous symmetry breaking, recovered in other treatments through nonperturbative configurations like instantons. To study the problem at hand, we combine the ideas of the perturbative functional RG for disordered systems with the general formalism of the exact/functional/nonperturbative RG. In the following, we shall denote our approach nonperturbative functional RG (NP-FRG). It provides a framework to study both perturbative and nonperturbative effects in any spatial dimension d and for any number of components of the fundamental fields, N . We exclude from the scope of the present series of articles relaxation and out-ofequilibrium dynamic phenomena, as well as spin glass ordering. We also postpone to a forthcoming publication the development of the NP-FRG in a superfield formalism able to directly address the failure of supersymmetry in connection with that of dimensional reduction. Short versions of the present work have appeared in Refs. [30,39]. The present paper is organized as follows. In section II we present the models and the formalism. We first introduce the models and discuss their physical relevance and the main open questions. From the corresponding replica field theories, we then derive the exact RG equation for the effective average action, which is the generating functional of the one-particle irreducible correlation functions at the running scale. We next relate the replica formalism, in which the replica symmetry is explicitly broken through the application of sources, to the cumulants of the renormalized disorder. We close the section by writing down the exact RG flow equations for these cumulants. In section III, we introduce a systematic nonperturbative approximation scheme. After first discussing the symmetries of the problem and the way to implement them in the effective average action formalism, we introduce the nonperturbative truncation scheme of the exact RG equation: it relies on (i) an expansion in cumulants of the disorder and (ii) a well tested approximation of the nonperturbative RG, the "derivative expansion", which uses the fact that the relevant physics is dominated by long wavelength modes to perform an expansion in the number of spatial derivatives of the fundamental fields. Finally, we detail the minimal truncation that we use in our numerical investigation of the random field O(N ) model (RFO(N )M). In section IV, we specialize the formalism to the study of the RFO(N )M. We introduce the scaling dimensions suitable to a search for the putative zero-temperature fixed point controlling the ordering transition. We first consider the case of the RFIM and then extend our description to the RFO(N )M. With the help of these dimensions, the RG flow equations are then cast in a scaled form. We also briefly comment on possible application to other disordered systems. 
We next discuss in section V an important property of the truncations previously described: because of the one-loop structure of the exact flow equations and of the appropriate choice of the approximations, one recovers the perturbative results both near the upper critical dimension, d_uc = 6, and in the N → ∞ limit of the RFO(N)M. Even more interestingly, we also show that our minimal truncation near the lower critical dimension for ferromagnetism of the RFO(N > 1)M, d_lc = 4, reduces to the perturbative functional RG result (at one loop) obtained from the nonlinear sigma model version of the model. 23 At the very least, the truncated NP-FRG thus provides a nonperturbative interpolation in the whole (N, d) plane of the known perturbative results near d = 4, d = 6, as well as N → ∞. Finally, the presentation and discussion of the results obtained for the RFO(N)M within the present NP-FRG approach will be described in the companion paper. 32

A. Models

We focus on the equilibrium, long-distance behavior of a class of disordered models in which N-component classical variables with O(N)-symmetric interactions are coupled to a random field. Depending on whether the coupling is linear or bilinear, the models belong to the "random field" (RF) or the "random anisotropy" (RA) subclasses. Such models with N = 1, 2, or 3 are relevant to describe a variety of systems encountered in condensed matter physics or physical chemistry. To name a few, one can mention dilute antiferromagnets in a uniform magnetic field, 40 critical fluids and binary mixtures in aerogels (both systems being modelled by the N = 1 RF Ising model), 41,42,43 vortex phases in disordered type-II superconductors (described in terms of an elastic glass model whose simplest version is the N = 2 RF XY model), 44,45,46 amorphous magnets, such as alloys of rare-earth compounds, 47,48 and nematic liquid crystals in disordered porous media (described by N = 2 or N = 3 RA models). 49 Other related models can be described as well within the same formalism, but will only be alluded to: the "random elastic" model describing an elastic system, such as an interface or a vortex lattice, pinned by the presence of impurities; the "random temperature" model associated with impurity-generated bond or site dilution in a ferromagnetic Ising model. For reasons that will become clear further down in this section, we exclude from the present study spin glass ordering and we rather concentrate on ferromagnetic ordering (in which the O(N) symmetry is spontaneously broken) or "quasi-ordering" (phases with quasi-long-range order).

Our starting point is the field-theoretical (coarse-grained) description of the systems in terms of an N-component scalar field χ(x) in a d-dimensional space and an effective Hamiltonian, or bare action, of the generic form

$$S[\chi; h, \tau] = \int_x \Big\{ \frac{1}{2}\big[\partial\chi(x)\big]^2 + U\big(\chi(x)\big) - h(x)\cdot\chi(x) - \sum_{\mu\nu}\tau^{\mu\nu}(x)\,\chi^\mu(x)\,\chi^\nu(x) \Big\}, \tag{1}$$

where $\int_x \equiv \int d^dx$ and the superscript µ spans the N components of the field; h(x) is a random magnetic field and τ(x) a second-rank random anisotropy tensor, which are both taken for simplicity (see also the discussion below) with gaussian distributions characterized by zero means and variances given by

$$\overline{h^\mu(x)\,h^\nu(y)} = \Delta\,\delta^{\mu\nu}\,\delta^{(d)}(x-y), \tag{2}$$

$$\overline{\tau^{\mu\nu}(x)\,\tau^{\rho\sigma}(y)} = \frac{\Delta_2}{2}\big(\delta^{\mu\rho}\delta^{\nu\sigma} + \delta^{\mu\sigma}\delta^{\nu\rho}\big)\,\delta^{(d)}(x-y), \tag{3}$$

where the overbar generically denotes the average over quenched disorder. Higher-order random anisotropies could be included as well; they will indeed be generated along the RG flow. However, for symmetry reasons, when starting with only a second-rank, or more generally an even-rank, random anisotropy, only even-rank anisotropies are generated: this corresponds to what is called the random anisotropy (RA) model.
The model with a nonzero ∆, for which anisotropies of both odd and even ranks are generated under RG flow, is the random field (RF) model. The equilibrium properties of the model are obtained from the average over disorder of the logarithm of the partition function, where J(x) is a source linearly coupled to the fundamental field and an (ultraviolet) momentum cutoff Λ, associated with an inverse microscopic lengthscale such as a lattice spacing, is implicitly considered in the functional integration over the field. With this definition however, the partition function and the corresponding thermodynamic potential W [J] = ln Z[J] are still functionals of the random fields: W [J] ≡ W [J; h, τ ]. As is well known from the theory of systems with quenched disorder, the thermodynamics is given by the average over disorder of the "free energy", i.e., of W [J]. Full information on the system, in particular an access to the correlation (Green) functions of the field, requires knowledge of the higher moments of W [J], viewed as a random functional. 74 As will be discussed more thoroughly further below, such information can be conveniently extracted by using the replica formalism whose starting point is the replacement of ln Z by the limit of (Z n − 1)/n when n, the number of replicas of the original system, goes to zero. Quite differently from the standard but controversial use of this replica trick, in which the analytic continuation for n < 1 opens the possibility of a spontaneous breaking of the replica symmetry, 16 we will consider an a priori more benign procedure in which the symmetry between replicas is explicitly broken by the introduction of external sources acting on each replica independently. This procedure will allow us to generate the cumulant expansion of the disorder-dependent functional W [J]. Within the replica formalism, the original problem is replaced by one with n replica fields {χ a (x)}, a = 1, 2, · · · , n, and the "replicated action", obtained after explicitly performing the average over the disorder in the partition function, where the linear sources J a (x), a = 1, 2, · · · , n, act on each replica separately. Associated with this partition function is the generating functional of the connected Green functions, W n [{J a }] = ln Z n [{J a }], and the effective action, Γ n [{φ a }], defined through a Legendre transform (Eq. (8)), the fields {φ a } and the sources {J a } being related by Eqs. (9a) and (9b), where ⟨X⟩ represents the average of X with the weight given in Eq. (7). The effective action is the generating functional of the one-particle irreducible (1-PI) correlation functions or proper vertices. The formalism we are about to describe also applies to extensions of the replicated action of Eq. (6) that can be cast in the form where the subscript Λ recalls that the various terms are at their bare value, defined at the microscopic scale Λ, and the dots indicate possible functions involving higher numbers of replicas. The functions U Λ , V Λ , · · · satisfy the O(N ) symmetry as well as the S n permutational symmetry between replicas. Eq. (1) is obviously a special case of the above expression, and higher-order anisotropies are included in a 2-replica term which is only a function of χ a (x) · χ b (x). RF and RA O(N ) models with nongaussian distributions of the random fields and anisotropies are described by terms involving higher numbers of replicas.
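In compact form, the two ingredients just introduced read as follows (a summary in the conventions of the text; normalizations are the standard ones):

ln Z[J] = lim_{n→0} ( Z[J]^n − 1 )/n ,

Γ_n[{φ_a}] = −W_n[{J_a}] + Σ_a ∫_x J_a(x) · φ_a(x) , with φ_a(x) = δW_n[{J_b}]/δJ_a(x) .

The first line is the replica trick mentioned above; the second is the Legendre transform defining the effective action, with the sources acting independently on each replica.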
(Note that the RA O(N ) model is defined as such for N > 1; the Ising case, N = 1, corresponds to another model, the random temperature one introduced hereafter.) Other disordered systems are also described by the form of the replicated action in Eq. (10). For instance, the random temperature model corresponds to Eq. (10) with U Λ and V Λ functions of the fields only through the O(N ) invariants ρ a = 1 2 |χ a | 2 , ρ b = 1 2 |χ b | 2 . In the RF, RA, and random temperature models, the 1-replica part of the bare action simply describes n copies of the standard ferromagnetic O(N ) model without disorder. The random elastic model is also a special case of Eq. (10). However, contrary to the models just discussed, the 1-replica potential U Λ is absent (or reduced to a purely quadratic term), so that there is no mechanism triggering a paramagnetic-ferromagnetic phase transition. The 2-replica potential V Λ , which is the second cumulant of a random pinning potential, is now a function only of the difference between the two replica fields, χ a (x) − χ b (x). As a result, the model has an additional symmetry, the statistical tilt symmetry, 50 which guarantees that the 1-replica part of the action, including the kinetic term, is not renormalized: the effective action thus has the same 1-replica part as the bare one. (Note that, as shown in Ref. [30] and in the companion paper, 32 the random elastic model, albeit with an underlying periodicity, also emerges as a low-disorder approximation of the RF and RA XY (N = 2) models.)

B. Exact RG equation for the effective average action

The exact RG in the effective average action formalism 34,36,51 relates the bare action, here Eq. (10), to the full effective action, Eq. (8), through a progressive inclusion of fluctuations of longer and longer wavelength. To do so, one introduces an infrared regulator, characterized by a scale k, which, in the functional integration leading to the partition function, suppresses the contribution of the low-energy modes with momentum |q| ≲ k while including the high-energy modes with |q| ≳ k. After Legendre transformation, this defines an "effective average action" at the running scale k, Γ k , which continuously interpolates between the microscopic scale k = Λ, at which Γ k=Λ reduces to the bare action, and the macroscopic one, k = 0, at which Γ k=0 equals the full effective action. More precisely in the present context, a "mass-like" quadratic term is added to the bare action, Eq. (10), where ∫ q ≡ ∫ d d q/(2π) d ; R µν k,ab (q 2 ) denotes infrared cutoff functions which, in order to enforce that the additional term satisfies the same O(N ) and S n symmetries as the bare action (see above), must take a specific form (Eq. (12)). The cutoff functions R k (q 2 ) and R̃ k (q 2 ) are chosen such as to realize the decoupling of the low- and high-momentum modes at the scale k: for this, they must decrease sufficiently fast for large momentum |q| ≫ k and go to a constant value (a "mass") for small momentum |q| ≪ k. The presence of an off-diagonal component R̃ k (q 2 ) is somewhat unusual and will be discussed later on. The cutoff functions must also satisfy the two constraints that (i) they go to zero when k → 0, so that one indeed recovers the full effective action with all modes accounted for, and (ii) R k (q 2 ) diverges while R̃ k (q 2 ) stays finite when k → Λ, so that the effective average action does reduce to the bare action.
(In what follows we are only concerned with the long-distance behavior of the models and do not pay attention to microscopic details; we thus let Λ go to ∞ in the cutoff functions.) Different choices have been proposed and tested in the recent literature. Standard choices for R k (q 2 ) are of the form R k (q 2 ) = Z k q 2 r(q 2 /k 2 ), with Z k a field renormalization constant yet to be specified and r(y) = y −1 (1 − y)Θ(1 − y), 52 where Θ is the Heaviside function, or r(y) = (e y − 1) −1 . 51 From the partition function Z k [{J a }] obtained from the bare action supplemented with the k-dependent regulator, Eq. (11), one defines the generating functional of the Green functions and, through a Legendre transform, one has access to the effective average action at the running scale k, Γ k , where the fields {φ a } and the sources {J a } are related by a (k-dependent) expression. The Legendre transform is slightly modified by the addition of the last term in Eq. (14), which ensures that the effective average action Γ k does reduce to the bare action at the microscopic scale, with no contribution from the infrared regulator. This addition does not change the behavior in the k → 0 limit since the regulator goes identically to zero. Physically, and to use the language of magnetic systems, the effective average action is a coarse-grained Gibbs free energy. It is the generating functional of the 1-PI correlation functions from which one can derive all Green functions of the modified system at the scale k. Note that here and in the following we omit the subscript n associated to the number of replicas in order to simplify the notations. The evolution of the effective average action with the infrared cutoff k is governed by an exact flow equation, where the trace involves a sum over both replica indices and N -vector components; R k (q 2 ) is defined in Eq. (12) and Γ (2) k is the tensor formed by the second functional derivatives of Γ k with respect to the fields φ µ a (q). The above RG flow equation is a complicated functional integro-differential equation that cannot be solved exactly in general, but, due to its one-loop structure and its reasonably transparent physical content, it provides a convenient starting point for nonperturbative approximation schemes. At this point, it is easy to see why we have excluded spin glass ordering from our considerations. The quadratic form of the infrared regulator in Eq. (11) suppresses the fluctuations of the low-momentum modes of the fundamental fields χ a . Spin glass ordering on the other hand involves fluctuations of composite fields, associated, e.g., with the "overlap" between different replicas. 16 Proper RG treatment of such fluctuations requires introducing a "mass-like" regulator for composite fields, i.e., in the simplest case a functional that is quartic in the fundamental fields instead of the quadratic term used here. We do not consider this case in the present work.

C. Explicit replica symmetry breaking and cumulants of the renormalized disorder

Among the technical difficulties encountered when making use of the exact RG equation, Eq. (16), there is one which is specific to disordered systems and to the present replica formalism: one must invert the matrix Γ (2) k,ab + R k,ab for arbitrary replica fields (since all replicas are different due to the independently applied sources). Before delving into this problem, it is worth giving some physical insight into the meaning of the explicit replica symmetry breaking used here.
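As a compact point of reference before proceeding, the exact flow equation just described (Eq. (16)) has, in the standard conventions of the effective-average-action formalism, the one-loop form

∂_t Γ_k[{φ_a}] = (1/2) Tr { ∂_t R_k ( Γ_k^{(2)}[{φ_a}] + R_k )^{−1} } , with t = ln(k/Λ) ,

where the trace runs over momenta, replica indices, and N-vector components; the difficulty specific to the present problem, addressed below, lies precisely in inverting Γ_k^{(2)} + R_k for arbitrary replica fields.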
As discussed in section II-A, after full account of the fluctuations, the bare disorder is renormalized to a full random ("free energy") functional W [J], which, to make its dependence on the bare quenched disorder explicit, we now denote W [J; h]. This random object can be characterized by the infinite set of its cumulants W 1 , W 2 , etc. The first cumulant W 1 gives access to the thermodynamics of the system and the higher-order cumulants describe the distribution of the renormalized disorder (we define, as in the bare action, a disorder with zero mean). Note that by construction the cumulants are invariant under permutations of their arguments. The cumulants can be generated from an average involving copies, or "replicas", of the original disordered system, as follows: the n copies have the same bare disorder but are coupled to different external sources. To fully characterize the random functional W [J; h], it is indeed important to describe its cumulants for generic arguments, i.e., for different sources. (Be aware that the subscripts 1, 2, ... used to denote the cumulants of W should not be confused with the subscript n denoting the number of replicas in section II-A and omitted since then: here, for instance, W 1 denotes the 1-replica component, corresponding to the first cumulant, whereas with the previous notation W n=1 is given by the sum of all cumulants with all their arguments equal.) A convenient trick to extract the cumulants with their full functional dependence is to let the number of replicas be arbitrary and to view the expansion in the right-hand side of Eq. (20) as an expansion in increasing number of "free", or unconstrained, sums over replicas of the functional W [{J a }] defined below Eq. (7). The term of order p in the expansion is a sum over p replica indices of a functional depending exactly on p replica sources, this functional being precisely equal here to the pth cumulant of W [J; h]. This procedure, which rests on an explicit breaking of the replica symmetry and an analytic continuation to arbitrary numbers of replicas (including the limit n → 0 previously introduced), is a priori different from the standard use of replicas, in which all sources are equal, and it avoids the delicate handling of a spontaneous replica symmetry breaking. 1,2,16,17 It has been used in a similar context by Le Doussal and Wiese. 53,54 The practical implementation of the expansion in free replica sums will be detailed in the next subsection. In our present NP-FRG approach, however, the central object is the effective action Γ, not W . The expansion of Γ[{φ a }] in increasing number of free replica sums proceeds along the same lines, with the second-order term given in terms of J[φ], the nonrandom source defined via the inverse of the Legendre transform relation in Eq. (22); in the third-order term, perm(123) denotes the two additional contributions obtained by circular permutations of the fields φ 1 , φ 2 , φ 3 , and we have used a short-hand notation for the derivatives (Eqs. (26,27)). Note that for clarity the O(N ) indices have been omitted in the above expressions. We point out that Γ p [φ 1 , ..., φ p ] for p ≥ 3 cannot be directly taken as the pth cumulant of a physically accessible random functional, in particular not of the disorder-dependent Legendre transform of W [J; h] (although it can certainly be expressed in terms of such cumulants of order equal to or lower than p). In the following and by abuse of language, we will nonetheless generically call the Γ p 's "cumulants of the renormalized disorder" (which is true for p = 2).
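Schematically, and with factorials chosen so that W_p is precisely the pth cumulant, the expansion in free replica sums described above reads

W[{J_a}] = Σ_a W_1[J_a] + (1/2!) Σ_{a,b} W_2[J_a, J_b] + (1/3!) Σ_{a,b,c} W_3[J_a, J_b, J_c] + · · · ,

with, for instance, W_2[J_1, J_2] equal to the disorder average of W[J_1; h] W[J_2; h] minus the product of the disorder averages of W[J_1; h] and W[J_2; h]; each sum over a replica index is unconstrained ("free"). The expansion of Γ[{φ_a}] has the same structure, with conventional signs chosen so that Γ_2 matches the second cumulant of the renormalized disorder.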
In complement to the above picture and more specifically for random field systems, it is also interesting to introduce a renormalized random field (or random force) h[φ](x) defined as the derivative of a random free-energy functional, and whose first moment is equal to zero by construction. It is easy to derive that its pth cumulant (p ≥ 2) is given by the derivative with respect to φ 1 , . . . , φ p , which can then be related to derivatives of Γ 2 , Γ 3 , ...; for instance, in the expression for the second cumulant (Eq. (29)), we have used a short-hand notation similar to that of Eqs. (26,27) and omitted the N -vector indices for simplicity. Terms of order 3 and higher are again given by more complicated expressions. We close this discussion by noticing that in the simpler case of the random manifold model, Γ 1 and W 1 being trivial and unrenormalized due to the statistical tilt symmetry (see above), J[φ] has a simple explicit expression. For instance, if the bare action has a quadratic 1-replica term, Γ 1 [φ] is equal to this quadratic functional and J[φ] is a known linear functional of φ, which further simplifies when considering uniform fields. This allows one to devise ways to directly measure the second cumulant of the renormalized disorder. 55,56 Nothing similar occurs in random field and random anisotropy models: the thermodynamics of such systems being highly nontrivial (with a phase transition and a critical point), the expression of J[φ] is involved and a priori unknown.

D. Exact RG equations for the renormalized disorder cumulants

The reasoning developed in the previous subsection can be applied to the effective average action Γ k and its expansion in free replica sums. As a result, Eqs. (18)-(29) can be extended to any running scale k. Yet, to make the expansion in free replica sums an operational procedure, one needs to be able to perform systematic algebraic manipulations, such as the inversion of the matrix appearing in the right-hand side of the exact RG equation, Eq. (16). We detail here the method for matrices depending on two replica indices but functionals of the n replica fields. Extension to higher-order tensors is presented in Ref. [54]. A generic such matrix A ab [{φ f }], where we have again denoted {φ f } the n replica fields to avoid confusion in the indices, can be decomposed into a replica-diagonal part Â a δ ab and an off-diagonal part Ã ab ; it is understood that the second term Ã ab no longer contains any Kronecker symbol. Each component can now be expanded in increasing number of free replica sums, where the superscripts in square brackets denote the order in the expansion (and should not be confused with superscripts in parentheses indicating partial derivatives). As an illustration, the expansion of the matrix Γ (2) k defined in Eq. (17) can be written in terms of the expansion of the effective average action itself, where the permutational symmetry of the arguments of the Γ k,p 's has been used. Algebraic manipulations on such matrices can be performed by term-by-term identification of the orders of the expansions. For instance, the inverse B = A −1 of the matrix A can also be put in the form of Eq. (30) and its components, B̂ a and B̃ ab , expanded in number of free replica sums. The term-by-term identification of the condition A · B = 1 leads to a unique expression of the various orders, B̂ [p] and B̃ [p] , of the expansion of B in terms of the Â [q] 's and Ã [q] 's with q ≤ p. The algebra becomes rapidly tedious, but the first few terms are easily derived. We can apply the above procedure to the exact RG equation for the effective average action.
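Schematically, the decomposition and inversion just outlined rest on the standard matrix identity (a formal expansion, with hats and tildes denoting the replica-diagonal and off-diagonal parts as above):

A_ab = Â_a δ_ab + Ã_ab , B = A^{−1} = Â^{−1} − Â^{−1} Ã Â^{−1} + Â^{−1} Ã Â^{−1} Ã Â^{−1} − · · · ,

each term being subsequently reexpanded in increasing numbers of free replica sums and identified order by order.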
For convenience, we introduce the modified propagator at the scale k, with replica-diagonal and off-diagonal components P̂ k,a and P̃ k,ab , which are still tensors with respect to momenta and vector component indices. Eq. (16) then leads to an infinite hierarchy of flow equations for the cumulants of the renormalized disorder (Eqs. (41)-(43) and so on), where tr indicates a trace over N -vector components and perm(12) denotes the expression obtained by permuting φ 1 and φ 2 . (Some care is needed in the term-by-term identification in order to properly symmetrize the expressions and satisfy the permutational property of the various arguments of the cumulants.) Expressing the higher-order terms of P k and the derivatives of the Γ k,p 's, and introducing the short-hand notation ∂̃ k to indicate a derivative acting only on the cutoff functions, i.e., ∂̃ k ≡ ∂ k R k δ/δR k + ∂ k R̃ k δ/δ R̃ k , Eq. (42) can be rewritten in a compact form, and similarly for higher-order cumulants, where 1 µν qq′ = (2π) d δ(q + q ′ )δ µν and the trace Tr is now over both momenta and N -vector components; the modified propagators P k can then be given explicitly. This provides a hierarchy of exact RG equations for the cumulants of the renormalized disorder (including the first one, which leads to a description of the thermodynamics). One should note that (i) the cumulants are functionals of the fields and contain full information on the complete set of 1-PI correlation functions and (ii) the flow equations are coupled, the (p + 1)th cumulant appearing in the right-hand side of the equation for the pth cumulant. As such, these RG equations remain intractable and their resolution requires approximations.

III. NONPERTURBATIVE APPROXIMATION SCHEME

A. Symmetries in the effective average action formalism

When writing the RG flow for the effective average action and when devising an approximation scheme to solve it, one should as far as possible make sure that the symmetries of the theory are not explicitly violated at any scale. Such a requirement is easily implemented as far as elementary symmetries, such as invariance by translation and rotation in Euclidean space, O(N ) symmetry, and S n replica permutational symmetry, are concerned: the infrared regulator ∆S k added to the bare action must be chosen such that it is invariant under the appropriate transformations, which is indeed guaranteed by the expressions in Eqs. (11,12). The exact effective average action at any scale k then also possesses the symmetries of the bare action, and one just has to be careful that the truncations do not explicitly break the symmetries, which is easily implemented. 36 A similar treatment can be applied to most additional symmetries of the disordered systems under consideration. For instance, the "statistical tilt symmetry" of the random manifold model is easily extended to a k-dependent statistical tilt symmetry with any regulator of the form given in Eqs. (11,12), which implies that the 1-replica part (first cumulant) of the effective average action is unrenormalized along the flow. Similarly, the additional inversion symmetries of the random anisotropy (χ a · χ b → −χ a · χ b ) and the random temperature models are easily accounted for with the choice R̃ k ≡ 0. Truncation schemes naturally follow. Taking into account the underlying supersymmetry that characterizes the random field model for a gaussian distribution of the random field 7 is much more involved.
First, because one knows that the supersymmetry, which goes with the dimensional reduction property, must be broken in low enough dimension (at least, in d = 3), so that, even if the RG flow is started with an initial condition obeying supersymmetry, a mechanism should be provided to describe a spontaneous breaking of the supersymmetry. Secondly, the supersymmetry shows up in a superfield formalism built with auxiliary fermionic and bosonic fields, but it is far from transparent in the present framework based on the fundamental fields. (This is true already at the level of the initial condition of the RG flow.) We shall therefore defer the proper resolution of this problem to a forthcoming publication. 57 Note that an underlying supersymmetry is also present in the random manifold model, where it also leads to the d → d − 2 dimensional reduction. However, the pure model with no disorder is merely a free field theory, and this is easily accounted for. 58

B. Truncation schemes

We have already stressed that solving the exact RG equation for the effective average action requires approximations. The general framework has proven quite versatile for devising efficient and numerically tractable approximations which are able to describe both universal and nonuniversal properties in any spatial dimension and to capture genuine nonperturbative phenomena (see Introduction). Such approximations generally amount to truncating the functional form of the effective average action, which results in a self-consistent flow that preserves the fundamental structure of the theory (such as the symmetries, see above). If one is interested in the long-distance physics of a system and in observables at small momenta, a systematic truncation scheme is provided by the so-called "derivative expansion". 34,36 It consists in expanding the effective average action in increasing number of derivatives of the field(s) and retaining only a limited number of terms. The lowest order is the "local potential approximation" (LPA) 59 in which one only considers the flow of the effective average potential, i.e., the effective average action for a uniform field configuration. The field is not renormalized and the associated anomalous dimension is equal to zero. Field renormalization, which is important in the present problem where one expects the anomalous dimension to be quite sizeable in low dimensions (e.g., numerical estimates give η ≃ 0.5 for the RFIM in d = 3), requires going beyond the LPA and considering the first order of the derivative expansion. Previous studies on a variety of systems, including the pure O(N ) model, have shown that the system's behavior is quantitatively very well described at this level of approximation. 36,37,60,61 Higher-order terms improve the accuracy, 62,63 but they rapidly become intractable except in simple models. For the disordered systems considered here, one more step is needed. We have seen in section II-C that an expansion in number of free replica sums can be used to generate the cumulants of the renormalized disorder. Keeping only a limited number of terms in the expansion therefore leads to a systematic truncation scheme. To describe both the thermodynamics and the renormalized probability distribution of the disorder, one must consider at least the first two cumulants, or equivalently, the second order in the expansion in free replica sums.
Finally, on top of the two previous approximations, it may be useful, and numerically more tractable, to expand the functions appearing in the truncated effective average action in powers of the field considered around a given (uniform) configuration. This configuration can be taken either as zero everywhere or as a nontrivial configuration that minimizes the effective average potential (here, more precisely, its 1-replica component that gives access to the thermodynamics). Again, the accuracy and convergence properties of such field expansions have been widely tested for many different models. In the present case, and for reasons that will become clear later on, field expansions should be used with great caution.

C. Minimal truncation

Given the general scheme presented above, the choice of a minimal nonperturbative truncation is guided by a combination of factors: experience gained from studies on other models, constraints associated with the symmetries of the full theory, intuition or previous knowledge concerning the physics of the problem at hand, the requirement of being able to recover, as much as possible, exact and perturbative results in the appropriate limits, and, of course, a practical limitation coming from the numerical capability to actually solve the set of RG flow equations. As we have already alluded to, a description of the long-distance physics of random field models and related disordered systems requires keeping at least the first two cumulants of the disorder, i.e., the first two terms, Γ k,1 and Γ k,2 , of the expansion of the effective average action in free replica sums. Because of the anticipated nonnegligible value of the anomalous dimension of the field η, one must also include in the description at least the first order of the derivative expansion of the first cumulant Γ k,1 . The resulting truncated functional form of the effective average action then reads as sketched below, where, as before, ρ a (x) = |φ a (x)| 2 /2. In these expressions, U k (φ 1 ) ≡ U k (ρ 1 ) is the effective average potential, which is equal to the 1-replica component Γ k,1 evaluated for a uniform field and will hereafter be simply denoted the 1-replica potential; V k (φ 1 , φ 2 ) ≡ V k (ρ 1 , ρ 2 , φ 1 · φ 2 ) is the 2-replica potential and is equal to the 2-replica component Γ k,2 evaluated for a uniform field configuration. Physically, U k (φ 1 ) is a coarse-grained Gibbs free energy and V k (φ 1 , φ 2 ) is the second cumulant of the renormalized disorder evaluated for uniform fields (see Eqs. (22,24)). The two terms Z k (ρ 1 ) and Y k (ρ 1 ) correspond to field renormalization functions for the Goldstone and massive modes, respectively. We note in passing that the fact that only the first two cumulants of the disorder have been kept in the truncation does not imply that the probability distribution of the renormalized disorder is actually taken as gaussian. Indeed, as will be discussed in the companion paper, 32 the probability is not gaussian in general. The truncation means that we have neglected the contribution coming from the third cumulant in the RG flow of the second cumulant and have therefore decoupled the hierarchy of flow equations for the cumulants. Being interested in the description of the models in the full (N, d) diagram, we will have recourse to further approximations that make the numerical resolution of the flow equations easier.
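A sketch of the truncated form just described, reconstructed from the definitions given in the text (gradient-term normalizations are conventional), is

Γ_k[{φ_a}] ≈ Σ_a ∫_x { U_k(ρ_a) + (1/2) Z_k(ρ_a) (∂φ_a)^2 + (1/4) Y_k(ρ_a) (∂ρ_a)^2 } − (1/2) Σ_{a,b} ∫_x V_k(φ_a, φ_b) ,

with ρ_a(x) = |φ_a(x)|^2/2, i.e., a first-order derivative expansion of the first cumulant together with a local 2-replica potential for the second cumulant.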
More specifically, we consider the lowest-order term of the field expansion of the field renormalization functions around a nontrivial configuration, ρ m,k = |φ m,k | 2 /2, which minimizes the 1-replica potential U k (ρ): Y k ≡ 0 and Z k (ρ) ≡ Z m,k , with Z m,k = Z k (ρ m,k ) and U ′ k (ρ m,k ) = 0. Physically, φ m,k is the magnetization (order parameter) at the scale k. (If φ m,k→0 = 0, the system is in an O(N ) symmetric phase, whereas if φ m,k→0 ≠ 0, the system is in the phase with broken symmetry.) Z m,k is chosen as the field renormalization in the cutoff function R k (q 2 ) (see Eq. (13)). Finally, we simplify the resulting RG flow equations by setting the off-diagonal cutoff function to zero, R̃ k ≡ 0. As will be shown, this choice leads in general to an explicit breaking of dimensional reduction (despite the fact that the infrared regulators vanish identically when k → 0). In the following paper 32 we shall discuss the way to nonetheless make sense out of the results, the distinction between spurious and real breaking of dimensional reduction being easily characterized. A complete resolution of this issue will be provided when extending the NP-FRG approach to the superfield formalism. 57 With the above approximations, which we shall refer to as the minimal truncation, the self-consistent NP-FRG equations can be derived from Eqs. (41)-(43). The flows of the 1- and 2-replica potentials then follow, where the trace is over the N -vector components and, due to the O(N ) symmetry, V k (φ 1 , φ 2 ) ≡ V k (ρ 1 , ρ 2 , z) with z = φ 1 · φ 2 / √(4ρ 1 ρ 2 ); the (modified) propagator P [0] k (q 2 ; ρ) is given accordingly, where µ = 1 is chosen to be the direction of the field φ and therefore corresponds to the massive mode while the (N − 1) remaining components represent the Goldstone modes. The flow of the field renormalization constant Z m,k is obtained from the prescription Z k (ρ) = ∂ q 2 Γ (2) k,1 (q 2 ; ρ) µµ | q 2 =0 with µ chosen as a Goldstone mode (µ ≠ 1) 36 and from the condition U ′ k (ρ m,k ) = 0. It can be written explicitly in diagrammatic form, where a line denotes the Goldstone propagator and dots represent vertices obtained from derivatives of either the 1-replica potential (single dots) or the 2-replica potential (dots linked by a dashed line). We did not include the graphs containing 4-point vertices because in the truncation considered here, they do not contribute to the flow of Z m,k . From the above flow equation, Eq. (50), one extracts a running anomalous exponent η k . The initial conditions for the RG flow equations are obtained from the bare action, Eq. (10). The RG flow equations form a closed set of coupled nonlinear integro-differential equations for two functions, U k (ρ 1 ) and V k (ρ 1 , ρ 2 , z), and a constant, Z m,k . The numerical task of solving these equations is still arduous and, when needed to reduce the difficulty of the computations, we will also consider truncated expansions of the 1- and 2-replica potentials in some or all of their field arguments (see below). The present approach represents a nonperturbative but of course approximate RG description. Already at the minimal truncation discussed above, one includes all operators previously suggested to be important for capturing the long-distance behavior of the present disordered models, namely operators involving 1- and 2-replica terms.
As will be shown further below, it also reduces to the leading results of perturbative RG analyses near the upper critical dimension, d uc = 6, near the lower critical dimension for ferromagnetism when N > 1, d = 4, and when the number of components N becomes infinite. One of its main advantages is that it provides a unified framework to describe models in any spatial dimension d and for any number N of field components. As such, it guarantees a consistent interpolation of all known results in the whole (N, d) plane, in addition to allowing the study of genuine nonperturbative phenomena. If more accuracy is needed, the truncation scheme proposed in section III-B gives a systematic means to refine the description, by including, e.g., the third cumulant or a more detailed account of the momentum dependence of the 1-PI vertices. In the following, we more specifically focus on the random field O(N ) model.

A. Scaling dimensions near a zero-temperature fixed point

For the RFIM, it has been proposed, 64,65 and convincingly supported by numerical and experimental results, 6,40,66 that the fixed point controlling the critical behavior associated with the transition between a high-temperature (or large-disorder strength) disordered (paramagnetic) phase and a low-temperature (or small-disorder strength) ordered (ferromagnetic) phase is at zero temperature (see Figure 1).

Figure 1: Schematic phase diagram of the RFIM in the disorder strength ∆ - temperature T plane above the lower critical dimension d lc = 2 (temperature can be introduced at the bare level through the Boltzmann weight). At low disorder and low temperature, the system is ferromagnetic, and it is paramagnetic otherwise. The arrows describe how the renormalized parameters evolve under the RG flow at long distance, and I and RF denote the critical fixed points of the pure and random-field Ising models, respectively.

The existence of such a zero-temperature fixed point around which temperature is dangerously irrelevant leads to a somewhat anomalous scaling at the critical point. 64,65 The two independent critical exponents characterizing the scaling behavior of the pure Ising model should a priori be supplemented by an additional exponent θ describing the vanishing of the (renormalized) temperature as the fixed point is approached. This exponent θ leads to a modification of the so-called hyperscaling relation, which becomes 2 − α = (d − θ)ν where the critical exponents α and ν have their usual meaning, and to a new scaling of the correlation functions. In particular, the so-called "connected" and "disconnected" components of the pair correlation function (or 2-point Green function) behave at the critical point as in Eqs. (53,54), where η is the usual anomalous dimension of the field and η̄ is related to the temperature exponent θ as recalled below. Above the upper critical dimension d uc = 6, the exponents take their classical, mean-field values, η = 0, α = 0, ν = 1/2, and θ = 2, leading to η̄ = 0. The dimensional reduction property leads to a constant shift of dimension, d → d − 2, i.e., to θ = 2 and η̄ = η, all exponents being in addition given by those of the pure model in dimension d − 2. Whether the scaling behavior around the critical point is described by 3 independent exponents, or only 2, has been a long-standing issue, with suggestions that an additional relation applies, θ = 2 − η or, equivalently, η̄ = 2η. 67 We shall address and answer this question in the following paper. 32 To search for a zero-temperature fixed point, it is convenient to introduce a renormalized temperature.
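For reference, the scaling forms invoked above (standard RFIM forms, consistent with the definitions of η, η̄, and θ used in the text) are

G_c(q) ∼ |q|^{−(2−η)} , G_d(q) ∼ |q|^{−(4−η̄)} , η̄ = 2 − θ + η , 2 − α = (d − θ)ν ,

where G_c and G_d denote the connected and disconnected pair correlation functions at criticality.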
Actually, one could add an explicit temperature T in the Landau-Ginzburg-Wilson description of the model considered here: multiplying the argument of the exponential in the partition function, Eq. (4), by a factor T −1 to make the correspondence with the Boltzmann factor of Statistical Physics leads to a bare replicated action in Eqs. (6) and (10) in which the 1-replica part, including the kinetic term, is multiplied by a factor T −1 , the 2-replica part by T −2 , etc. Generally speaking, one can use this temperature T as a bookkeeping device to sort the orders in the expansions in number of free replica sums. As a result for instance, the modified propagator P k [φ 1 , φ 2 ] is independent of T . One can use this bookkeeping trick to devise ways to define a renormalized temperature at running scale k, T k , which reduces to the "bare" temperature T at the microscopic scale k = Λ. To this end, we first define the renormalized disorder strength at scale k as ∆ m,k ≡ ∆ k (φ m,k , φ m,k ), where, as before, φ m,k is a field configuration that minimizes the (1-replica) potential U k (φ), and ∆ k (φ 1 , φ 2 ) is the second cumulant of the renormalized effective random field defined as in Eq. (29). In the present truncation, the second cumulant is only considered for homogeneous field configurations and Γ (11) k,2 reduces to V (11) k (φ 1 , φ 2 ), with the same notations for partial derivatives as in Eqs. (26,27). At the microscopic scale Λ, ∆ m,k reduces to ∆ Λ /T 2 where ∆ Λ is the bare variance of the random field and the factor T −2 arises for the reasons just explained above. A running temperature T k can now be defined accordingly. One checks that, since Z m,Λ = T −1 (see Eq. (10) and the discussion above), T k indeed reduces to T when k = Λ. An associated running exponent θ k is obtained from the flow of T k . By using the definition of η k , one may alternatively introduce a running exponent η̄ k = 2 − θ k + η k , which converges to the critical exponent η̄ defined in Eqs. (53,54) if the relevant fixed point is reached, and compute it from the corresponding flow equation. On top of the usual scaling dimensions, U k , V k ∼ k d and φ ∼ (Z −1 m,k k d−2 ) 1/2 , one can use the running temperature to define dimensionless quantities (denoted by lower-case letters) suitable for looking for a zero-temperature fixed point, with δ k (ϕ 1 , ϕ 2 ) = v (11) k (ϕ 1 , ϕ 2 ). Note that with the definitions of ∆ m,k and T k , δ m,k ≡ δ k (ϕ m,k , ϕ m,k ) is constant along the RG flow and equal to its initial value ∆ Λ /Λ 2 (in practice, and since we are not interested here in making a precise connection to the microscopic scale, we will set δ m,k = 1).

B. Scaled form of the exact RG equations for the RFIM

With the use of the above defined dimensionless renormalized quantities, the flow equations can be expressed in a scaled form. Specifically, one can recast Eqs. (47) and (48) for N = 1 in a form where ∂ t is a derivative with respect to t = ln(k/Λ), a prime denotes a derivative with respect to the field (when only one argument is present), v −1 d = 2 d+1 π d/2 Γ(d/2), and we recall that δ k (ϕ 1 , ϕ 2 ) = v (11) k (ϕ 1 , ϕ 2 ); the threshold functions entering the equations are defined with p(y) = y(1 + r(y)) and y = q 2 /k 2 . The properties of these threshold functions, whose detailed behavior depends on the choice of the infrared cut-off function r(y), have been extensively discussed. 36,51 They decay rapidly when w ≫ 1, which, since u ′′ k (ϕ) = U ′′ k (φ)/(Z k k 2 ) is the square of a renormalized mass, ensures that only modes with mass smaller than k contribute to the flow in Eqs. (61) and (62).
As an illustration, the use of the so-called "optimized" cut-off function r(y) = y −1 (1 − y)Θ(1 − y), 52 leads to explicit expressions. The threshold functions essentially encode the nonperturbative effects beyond the standard one-loop approximation. Note that, although not shown in the notation, the threshold functions explicitly depend on the scale k via the running exponent η k . The above flow equations for u k (ϕ 1 ) and v k (ϕ 1 , ϕ 2 ) are supplemented by equations for η k and η̄ k , i.e., for Z m,k and T k or ∆ m,k . (Note that the equation for η̄ k is actually redundant as it is a consequence of the other equations; it is nonetheless convenient to introduce and use it.) The flow equation for Z m,k follows from Eq. (50) and one finds an explicit expression in which we have used the short-hand notation δ ′ k (ϕ) ≡ ∂ ϕ δ k (ϕ, ϕ) = δ (10) k (ϕ, ϕ) + δ (01) k (ϕ, ϕ) and the subscript "m, k" indicates that the functions are evaluated for fields equal to ϕ m,k ; we have also introduced an additional (dimensionless) threshold function whose properties are discussed in Refs. [36,51]. For instance, with the "optimized" regulator introduced above, 52 one finds an explicit form. Finally, the flow equation for ∆ m,k (or equivalently the flow of the constraint δ m,k = 1 discussed below Eq. (60)) leads to an equation in which, as before, δ ′ k (ϕ) ≡ ∂ ϕ δ k (ϕ, ϕ) and similarly for δ ′′ k (ϕ), and we have introduced the quantities Σ m,k and Σ̃ m,k . All other notations are as before. Before extending the results to the RFO(N )M, we point out important features of the above equations. First, we have kept terms proportional to T k but, provided one reaches a fixed point with an exponent θ = θ k→0 > 0 where temperature is thus irrelevant, those terms are subdominant in the scaling region k → 0. In particular, the fixed point is attained by following the flow with an initial temperature T equal to zero. Secondly, "anomalous" terms, Σ m,k and T k Σ̃ m,k , appear in the expression of 2η k − η̄ k . As can be inferred from Eqs. (70) and (71), Σ m,k can only differ from zero, and Σ̃ m,k become infinite, when a non-analyticity (a "cusp") in (ϕ 1 − ϕ 2 ) appears in the (dimensionless) renormalized disorder function δ k (ϕ 1 , ϕ 2 ) when ϕ 2 → ϕ 1 (and both go to ϕ m,k ). If δ k (ϕ 1 , ϕ 2 ) is analytic, no signature of such anomalous behavior is found. (We have implicitly assumed that no stronger nonanalyticity appears, which means that a fixed point can be reached and that the theory is renormalizable; this has to be checked in actual computations.) We shall come back in more detail to these two important aspects of the NP-FRG approach in the following paper. 32 Finally, one may notice that because of the Z 2 ≡ O(1) symmetry, the potential u k is an even function of ϕ and, because of the additional permutation symmetry, v k (ϕ 1 , ϕ 2 ) is invariant under the exchange ϕ 1 ↔ ϕ 2 (the Z 2 symmetry further imposing v k (−ϕ 1 , −ϕ 2 ) = v k (ϕ 1 , ϕ 2 )).

C. Generalization to the RFO(N )M

The preceding treatment can be extended to the RFO(N )M. The variable ρ = |φ| 2 /2 is written in terms of a dimensionless variable ρ̃, with ρ = k d−2 T k Z −1 m,k ρ̃, where the tilde will be dropped in the following when no confusion is possible between dimensionless and dimensionful quantities. The variable z = φ 1 · φ 2 /(2 √ ρ 1 ρ 2 ) is already dimensionless. For the 1-replica second-order tensors (in N -vector components) evaluated for a uniform field configuration, e.g., for P̂ k , the O(N ) symmetry reduces the number of terms to a "longitudinal" component (corresponding to the massive mode, see Eq. (49)) and N − 1 identical "transverse" components (corresponding to the Goldstone modes, see Eq. (49)).
We therefore introduce the corresponding longitudinal and transverse components and define the longitudinal, w k,L (ρ), and transverse, w k,T (ρ), masses, where a prime now denotes a derivative with respect to ρ. The renormalized disorder strength at the running scale k can be characterized, e.g., through the transverse component, ∆ k,T (ρ, ρ, z = 1), evaluated for ρ = ρ m,k = |φ m,k | 2 /2, and T k is introduced accordingly. Expressing the O(N ) symmetry in the 2-replica second-order tensors is a little more tedious, but nonetheless straightforward. The resulting flow equations in scaled form read (where, for ease of notation, we drop the subscript k in the right-hand sides, i.e., up to a sign, in the beta functions, for all quantities but T k , and also drop the arguments of v(ρ 1 , ρ 2 , z)), where all symbols have the same meaning as in the previous equations and, by construction, w L (ρ m ) = 2ρ m u ′′ (ρ m ), w T (ρ m ) = 0, and δ T (ρ m ) = 1. Note that in the last two equations, we have omitted for simplicity the (subdominant) terms involving T k in the beta functions and that in Eq. (80), the dots denote "anomalous" terms which generalize those found for the RFIM (see Eq. (69)) and vanish when the function v k (ρ 1 , ρ 2 , z) is analytic in all its arguments; their expression is lengthy and will be discussed in the companion paper. 32 When N = 1 and z = ±1, Eqs. (77) and (78) reduce to the previous equations for the RFIM, Eqs. (61) and (62), expressed with ρ as variable instead of φ: v k (ρ 1 , ρ 2 , z = +1) is equal to v k (ϕ 1 , ϕ 2 ) for ϕ 1 ϕ 2 > 0 and v k (ρ 1 , ρ 2 , z = −1) is equal to v k (ϕ 1 , ϕ 2 ) for ϕ 1 ϕ 2 < 0; δ k,L (ρ) ≡ δ k (ϕ) and w k,L (ρ) ≡ u ′′ (ϕ). 75 Finally, the comments made about the important features of the flow equations for the RFIM carry over to the equations for the RFO(N )M.

D. Application to related disordered models

Even though we have chosen to more specifically focus on the random field model, it is worth sketching at this point the relevance of the NP-FRG equations derived in this section to other disordered systems. (As stressed already several times, we exclude spin glass ordering from our considerations.) The flow equations obtained for the RFO(N )M, Eqs. (77)-(79), directly apply to the RAO(N )M for describing the long-distance physics associated with ferromagnetic ordering. The putative fixed points are also expected to be at zero temperature, so that similar scaling dimensions need to be introduced. The specificity of the random anisotropy model comes in the initial conditions (see section II-A) and in the additional symmetry of the 2-replica potential, namely, v k (ρ 1 , ρ 2 , z) = v k (ρ 1 , ρ 2 , −z). Similarly, the flow equations for the RFIM, Eqs. (61,62,66), can be applied to the random elastic model. In this case, one can check that, owing to the statistical tilt symmetry, u ′ k (ϕ) ≡ 0 and η k ≡ 0 while v k (ϕ 1 , ϕ 2 ) ≡ v k (ϕ 1 − ϕ 2 ). After introducing the variable y = ϕ 1 − ϕ 2 and dropping the temperature, Eq. (62) can be rewritten in terms of y alone, where a prime denotes a derivative with respect to y. The roughness exponent is defined through ζ = −(d−4+η)/2, and one can then see that the above equation reduces to the one-loop FRG equation for a disordered elastic medium. 28,44 Going beyond this level of description requires considering the next orders of the truncation scheme, in particular including the 3-replica potential and applying the next order of the derivative expansion for the 2-replica effective average action. Finally, Eqs.
(61,62,66) can be used in the case of the random temperature model with an appropriate account of the symmetry: u k ≡ u k (ρ), v k ≡ v k (ρ 1 , ρ 2 ), with ρ = ϕ 2 /2. However, the scaling dimensions introduced to search for a zero-temperature fixed point are not appropriate in the present case where one anticipates a fixed point at a nonzero temperature (for a preliminary nonperturbative treatment, see Ref. [68]).

V. RECOVERING THE PERTURBATIVE RESULTS

A. Analysis of the NP-FRG equations near d = 6 and for N → ∞

For ease of notation, we only consider the RFIM, but a similar analysis holds for the RFO(N )M. It is easy to check that the flow equations, Eqs. (61,62,66,69), admit as a fixed-point solution the Gaussian fixed point, characterized in particular by δ (G) * (ϕ 1 , ϕ 2 ) = 1. The Gaussian fixed point is once unstable for dimensions larger than 6, but the coupling constant associated with the ϕ 4 -term in u(ϕ) also becomes relevant for dimensions less than 6, so that the Gaussian fixed point acquires an additional unstable direction for d < 6, as is well known. Equivalently, one can make a more direct connection to standard perturbation analysis by reframing the above results in a double expansion in ǫ and in the ϕ 4 coupling constant defined through λ k = u ′′′′ k (ϕ m,k ). Introducing as before ρ m,k = (1/2)ϕ 2 m,k , one obtains that η, η̄ = O(λ 2 ), δ = 1 + O(λ 2 ), where we have used the Taylor expansion of the threshold functions for small arguments. (The fixed-point solution of Eqs. (85,86) is of course equal to that obtained above with λ * = ǫλ 1 * and ρ m * = ϕ 2 m * /2.) Again, up to irrelevant factors, this gives back the one-loop perturbative result for the pure Ising model obtained in a weak-coupling expansion in d = 4 − ǫ. The above result is derived through an expansion in a single coupling constant, λ k , associated to the 1-replica part of the effective action. It has been argued by Brezin and De Dominicis 69,70 that one should consider instead an expansion involving all ϕ 4 coupling constants associated with multiple replicas. In the present formalism, we can perform a more careful analysis using the ϕ 4 coupling constants associated with the 2-replica part of the effective action, coupling constants that are considered as potentially relevant in Refs. [69,70]. We find that this does not change the conclusion and, as previously obtained in Ref. [71], that the fixed point corresponding to dimensional reduction is still once unstable at first order in ǫ. This is discussed in more detail in Appendix A. The above analysis is extended to the O(N ) version in a straightforward way. The property that the perturbative result at first order in ǫ = 6 − d is recovered within our nonperturbative approximation scheme is actually a consequence of the one-loop-like structure of the exact flow equation for the effective average action, Eq. (16). For the very same reason, the large N limit can also be easily recovered. Rescaling the variables as ρ → N ρ, z → z and the potentials as u → N u, v → N v, and retaining only the dominant terms when N → ∞, one finds that η = O(1/N ), η̄ = O(1/N ) and that the "longitudinal" contributions drop out from the RG flow equations. As a consequence, Eqs. (77) and (78) can be recast using a generalized "transverse" disorder cumulant δ k,T (ρ 1 , ρ 2 , z), defined via an extension of Eq. (74), which reduces to δ k,T (ρ) when ρ 1 = ρ 2 = ρ and z = 1. Eq. (88) is obtained from the flow equation for v k (ρ 1 , ρ 2 , z).
If one starts the flow equations with an initial condition v Λ (ρ 1 , ρ 2 , z) = 2 √(ρ 1 ρ 2 ) z (corresponding to δ Λ,T = 1), the beta function is identically zero and one therefore finds that the solution of Eq. (88) at all scales remains δ k,T (ρ 1 , ρ 2 , z) = 1. 76 The resulting equation for the 1-replica potential is then very similar to its counterpart for the pure O(N ) model in the N → ∞ limit in dimension d − 2 (the flow equation is then simply given by the LPA 36 ). To see the connection more explicitly, one can follow the flow of the ϕ 4 coupling constant λ k = u ′′ k (ρ m,k ) as well as that of ρ m,k which, we recall, satisfies u ′ k (ρ m,k ) = 0 and is akin to a (dimensionless) order parameter at the running scale k. One finds that the resulting flow admits a nontrivial fixed point with ρ m * ∝ v d l (d) 3 (0). This fixed point is once unstable (and it remains so when considering the additional directions associated with the 2-replica potential, see above) and is characterized by critical exponents satisfying the dimension reduction property, e.g., ν = 1/(d − 4) to be compared to ν = 1/(d − 2) for the pure model. Note that the above perturbative expressions are recovered from the truncated NP-FRG equations even with an additional approximation using a field expansion around the minimum of the 1-replica potential. A strong property of the minimal nonperturbative truncation described above is that it also reduces, in the appropriate limit and for the RFO(N > 1)M, to the perturbative FRG equations at first order in ǫ = d − 4 derived by Fisher. 23 The latter are obtained from a low-disorder loop expansion of the nonlinear sigma model associated with the RFO(N )M. It is therefore quite remarkable that our formalism in which no hard constraint is enforced leads to the proper result within the minimal approximation scheme. For the RFO(N )M with N > 1, d = 4 is the lower critical dimension for ferromagnetism. (We mean here long-range ferromagnetic order with a nonzero order parameter; the case of quasi-long range order will be discussed later on.) As a result, the critical point and the associated fixed point occur near d = 4 for a value of ρ m that diverges as 1/ǫ with ǫ = d − 4. As in the case of the pure O(N ) model near d = 2, 37 one can therefore organize a systematic expansion in powers of 1/ρ m . At the minimum of the 1-replica potential (ρ = ρ m ), the transverse mass, associated with the Goldstone modes, is zero whereas the longitudinal mass is very large and scales as ρ m (anticipating that u ′′ (ρ m ) does not vanish). One can then use the asymptotic properties of the threshold functions for large arguments, which encodes the decoupling of the massive mode. In addition, we assume that as ρ m → ∞, δ L,T (ρ m ) stay finite (recall that actually, δ T (ρ m ) = 1) and that their derivatives, δ ′ L,T (ρ m ), etc., go to zero at least as fast as 1/ρ m ; on the other hand, ρ m is a singular point for u(ρ) (the location of its minimum), so that, even as ρ m → ∞, we expect u ′′ (ρ m ), u ′′′ (ρ m ), etc., to stay of O(1). The consistency of these assumptions is easily checked a posteriori. Inserting the above results and assumptions in Eq. (79) gives an expression which shows that η is of order 1/ρ m . Differentiating once the flow equation for the 1-replica potential u k (ρ) leads to an equation from which one obtains the flow of the running order parameter ρ m,k , where ǫ = d − 4. (Note that we have again omitted the subscript k in the right-hand sides and dropped the subdominant terms involving the renormalized temperature T k .)
The last equation shows that the fixed point value of ρ m,k satisfies, as anticipated, ρ m * = O(1/ǫ), which results in η, η̄ = O(ǫ). One can now apply a similar treatment to the flow equation for the 2-replica potential evaluated for ρ 1 = ρ 2 = ρ m,k . For convenience, we introduce the function R k (z), which, due to Eq. (74) and the constraint δ k,T (ρ m,k ) = 1, satisfies R ′ k (z = 1) = 1/(2ρ m,k ). 77 The flow equation for R k (z) can be expressed as ∂ t R k (z) = (2ρ m,k ) −2 ∂ t v k (ρ, ρ, z)| ρ=ρ m,k + ∂ t ρ m,k ∂ ρ [v k (ρ, ρ, z)/(2ρ) 2 ]| ρ=ρ m,k , which, with the help of Eq. (96), finally leads to the flow of R k (z). To dominant order in ǫ, one can set d = 4 in v d and in the threshold functions, where v −1 4 = 32π 2 and R k (z) is of order ǫ near its fixed point. The above equations coincide with the one-loop perturbative FRG equations derived by Fisher. 23 Note that this result is independent of the choice of the infrared cut-off function R k (q 2 ): indeed, one easily checks that the relevant threshold-function values at d = 4 are universal. Finally, we note that setting N = 2 and introducing the variable φ = cos −1 (z) in Eq. (101) leads to an equation which, after use of Eq. (100) for η k and η̄ k , coincides with the 1-loop perturbative FRG equation for a disordered periodic elastic system with a one-component displacement field: compare for instance with Eq. (81), in which one should set ζ = 0 due to the periodicity. 72 (Be careful, however, that η k and η̄ k denote different sets of exponents in the formalism leading to Eq. (81) and in the present one.) 78

VI. CONCLUDING REMARKS

In this work, described in the present paper and in the following one, 32 we have developed a theoretical approach which is able to describe the long-distance physics, criticality, phase ordering or "quasi"-ordering, of systems in the presence of quenched disorder, in particular random field models for which standard perturbation theory is known to fail. The approach is based on an exact renormalization group equation for the effective average action (the generating functional of 1-PI vertices) and on a nonperturbative truncation scheme. This nonperturbative RG formalism has recently been applied with success to a variety of systems. The key point in the present problem is to provide a proper account of the renormalized distribution of the quenched disorder, and we have shown that this can be conveniently done through a cumulant expansion and the use of a replica method in which the permutational symmetry among replicas is explicitly broken. We have stressed that any relevant treatment of random field models and related disordered systems must include the second cumulant of the renormalized disorder, i.e., at least a function of two (replica) field arguments. Accordingly, we have proposed a nonperturbative approximation scheme. Within this scheme, the minimal truncation for the RFO(N )M already reproduces the leading results of perturbative RG analyses near the upper critical dimension, d uc = 6, and when the number of components N becomes infinite. More importantly, it gives back the perturbative FRG equations near the lower critical dimension for ferromagnetism when N > 1, d = 4. One of the main advantages of the present approach, which will be illustrated in the following paper, is that it provides a unified framework to describe models in any spatial dimension d and for any number N of field components. As such, it guarantees a consistent interpolation of all known results in the whole (N, d) plane, in addition to allowing the study of genuine nonperturbative phenomena. We thank D. Mouhanna for helpful discussions.
Chronoradiobiology of Breast Cancer: The Time Is Now to Link Circadian Rhythm and Radiation Biology

Circadian disruption has been linked to cancer development, progression, and radiation response. Clinical evidence to date shows that circadian genetic variation and time of treatment affect radiation response and toxicity for women with breast cancer. At the molecular level, there is interplay between circadian clock regulators such as PER1, which mediates ATM- and p53-mediated cell cycle gating and apoptosis. These molecular alterations may govern aggressive cancer phenotypes, outcomes, and radiation response. Exploiting the various circadian clock mechanisms may enhance the therapeutic index of radiation by decreasing toxicity, increasing disease control, and improving outcomes. We will review the body's natural circadian rhythms and clock gene regulation while exploring preclinical and clinical evidence that implicates chronobiological disruptions in the etiology of breast cancer. We will discuss radiobiological principles and the circadian regulation of DNA damage responses. Lastly, we will present potential rational therapeutic approaches that target circadian pathways to improve outcomes in breast cancer. Understanding the implications of optimal timing in cancer treatment and exploring ways to entrain circadian biology with light, diet, and chronobiological agents like melatonin may provide an avenue for enhancing the therapeutic index of radiotherapy.

Introduction

Decades of research demonstrate that radiation responses vary across an organism's circadian period. The emerging field of chronoradiobiology examines the biological relationships between the complex mechanisms of circadian regulation and cellular radiation responses with the goal of improving the therapeutic index of radiation treatments. Understanding circadian regulation, disruptions, and downstream effects that can impact radiation therapy could lead to potential improvements for patients. Data on circadian disruption and clock gene regulation may lead to new approaches to personalize care. In this review, we explore the fundamentals of chronobiology, focusing on the relationship to breast cancer pathogenesis, treatments, toxicity, and outcomes. The epidemiological and molecular associations of breast cancer with circadian pathways will be discussed as well as the interplay of circadian clock genes and radiation therapy. We then propose practical methods for leveraging circadian rhythms that may someday be used in radiotherapy, with potential roles for time-restricted diets and chronopharmaceuticals.

Chronobiology

Chronobiology, the study of biological rhythms and the biomolecular clockwork that drives them, has been implicated in the initiation of human disease events.

Figure 1. Hours of daily and nightly maxima for selected hormones and processes. These periodic oscillations are kept approximately constant via circadian rhythms of neuroendocrine signaling, which in turn is regulated by circadian clock genes [7,16]. RAAS = renin-angiotensin-aldosterone system; T = testosterone; DLMO = dim-light melatonin onset; GH = growth hormone; TSH = thyroid stimulating hormone; T3 = triiodothyronine; FGF = fibroblast growth factors. Adapted with permission from ref [16], copyright 2018 Springer Nature.
Clock Genes

These physiologic oscillations are driven by a highly conserved set of clock genes that form an autoregulatory transcription-translation feedback loop. The classical model of this core clock network is illustrated in Figure 2. For consistency, some common protein aliases will be used throughout this review, i.e., ARNTL will be referred to as "BMAL1", and NR1D1 will be referred to as "REV-ERBα". The heterodimer BMAL1:CLOCK binds E-box DNA response elements and enhances the transcription of clock-controlled genes, which include other clock proteins, i.e., isoforms of PER, CRY, REV-ERB, and ROR. In addition to their numerous regulatory activities in the cytosol (including roles in DNA damage responses), the negative-limb proteins accumulate, enter the nucleus, and repress BMAL1:CLOCK activity, closing the feedback loop.

These proteins enforce their regulatory effects through a host of known mechanisms: regulating the activity of transcription factors, conditionally dimerizing with different partners, binding enzymes to modulate activity, and facilitating posttranslational modifications like phosphorylation and acetylation [16]. For simplicity, core clock proteins are often grouped into a "positive limb" that drives the clock forward (BMAL1, CLOCK, RORα) and a "negative limb" that opposes it (PER1, PER2, PER3, CRY1, CRY2, REV-ERBα, DEC1, DEC2) (Table 1).
Table 1. Proteins of the classical core clock network. These drive circadian rhythms at the level of the cell, intersecting with multiple cancer control pathways. They are often divided into (a) positive and (b) negative limbs. For consistency, some protein aliases will be used in this review, i.e., ARNTL will be referred to as "BMAL1", NR1D1 as "REV-ERBα", and BHLHE40/41 as "DEC1/2". BC = breast cancer; DDR = DNA damage response; HIF = hypoxia-inducible factor.
(a) Positive limb (partial): RORα binds RRE to promote transcription of BMAL1 [3,38,39,52].
(b) Negative limb (partial): BC tumor suppression, outcomes, receptor status; delayed S phase in BC cells; epithelial-to-mesenchymal transition [32,53,54].

All nucleated somatic cells exhibit self-sustaining circadian rhythms that emerge from this biomolecular clockwork [39]. Although circadian rhythms are distinct from the cell cycle, clock proteins can regulate the expression and activity of key players in cell cycle progression [55,56]. The core clock network has been shown to regulate circadian rhythms of quiescence [57], stemness, plasticity, and timed gating of cell cycle progression [57] in various cell types.
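The feedback-loop logic above can be made concrete with a toy simulation. The following is a minimal sketch, assuming a classical Goodwin-type oscillator, a textbook abstraction rather than the full PER/CRY network described in this review; all rate constants are illustrative, not measured values.

```python
# m ~ clock-gene mRNA, p ~ cytosolic protein, r ~ nuclear repressor
# (the "negative limb" acting back on BMAL1:CLOCK-driven transcription).

def ttfl_step(m, p, r, dt, k=0.15, n=10):
    dm = 1.0 / (1.0 + r**n) - k * m   # transcription repressed by r; mRNA decay
    dp = m - k * p                    # translation; protein decay
    dr = p - k * r                    # nuclear accumulation; repressor decay
    return m + dm * dt, p + dp * dt, r + dr * dt

m = p = r = 0.1
dt, t, trace = 0.01, 0.0, []
while t < 480.0:                      # simulate 20 days
    m, p, r = ttfl_step(m, p, r, dt)
    trace.append((t, m))
    t += dt

# Estimate the free-running period from the spacing of successive mRNA peaks;
# for these toy parameters it comes out roughly circadian (near a day).
peaks = [trace[i][0] for i in range(1, len(trace) - 1)
         if trace[i - 1][1] < trace[i][1] > trace[i + 1][1]]
print(f"approximate free-running period: {peaks[-1] - peaks[-2]:.1f} h")
```

The design point the sketch captures is that sustained oscillation requires sufficiently steep repression (a high Hill coefficient n); a shallower feedback term damps out to a steady state, mirroring the loss of rhythmicity discussed for dysfunctional clocks below.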
Understanding the relationships between negative and positive regulators of the clock system is essential to understanding how to harness their benefit.

Hierarchical Organization and Zeitgebers

Circadian rhythms can only approximate a 24-h period. To prevent cells, tissues, and organs from desynchronizing into their own independent rhythms, the suprachiasmatic nuclei (SCN) of the hypothalamus act as a central circadian pacemaker [58]. The SCN defines the whole-organism circadian phase, the physiologic time of day, which is communicated to the rest of the body through neuroendocrine signals [59]. Because the cells of the SCN are governed at the molecular level by clock proteins, natural circadian periods can be slightly longer or shorter than 24 h. Genetic and epigenetic variations in the core clock network result in an emergent characteristic known as an individual's chronotype [60-62]. Faster clockwork means shorter endogenous periods, resulting in a chronotype that leads to a propensity for "morningness"; slower clockwork leads to a propensity for "eveningness" [58]. Uncorrected, these slight deviations would lead to free-running periods that are out of phase with the Earth's rotation; however, the central pacemaker's circadian rhythm can be realigned through circadian entrainment (Figure 3). To keep in line with the day-night cycle, the hypothalamus takes input from environmental time cues, referred to as zeitgebers or "time givers", and uses this information to calibrate the suprachiasmatic nuclei [6]. This central pacemaker then sends signals throughout the body via different pathways that include the pineal gland's production of the hormone melatonin [63]. This coordinates the body's peripheral clocks, producing circadian oscillations in cell activity and organ function. These outputs in turn provide feedback to the central pacemaker in the form of secondary zeitgebers like food intake, exercise, and body temperature [28,64,65]. Light is the body's primary zeitgeber.
Bright light that contains blue wavelengths (e.g., daylight, standard electrical lighting) stimulates a nonvisual pathway from the retina to the central circadian pacemaker in the hypothalamus [5]. In addition to other alerting effects, this acutely suppresses the release of melatonin from the pineal gland [6,63]. Blue light suppresses melatonin in a dose-dependent fashion [66] while triggering other neurologic responses [6]. Timed exposures to polychromatic or blue-enriched light have been used both to advance and delay the circadian phase in humans [40,67]. Chronic exposure to light at night shifts the circadian phase later and has been shown to diminish the amplitude of melatonin released each night [59,68]. This forms the basis whereby rotating shift work or long-term exposure to artificial light at night leads to circadian disruption. Artificial lights are used to treat major depressive disorder with seasonal pattern [69], but they may also be used to intentionally calibrate the central circadian pacemaker (see Figure 4), resulting in entrainment to a new, phase-shifted period (see Figure 3) [40,67]. Special lighting arrays are currently in use aboard the International Space Station for this purpose, to properly entrain the circadian rhythms of astronauts who do not experience 24-h day-night cycles [41,70]. In combination with proper scheduling, bright-blue light has been shown to be effective at helping combat jetlag and treating specific neuropsychiatric conditions on Earth [36]. Altogether, this maintains the harmony between body and ecosystem; the circadian phase of each organ system (i.e., their physiologic time) remains in sync with one another, and the whole system is aligned with the environment (i.e., the external time) as outlined in Figure 4. The concept of entrainment and its ability to alter circadian rhythms will be crucial for harnessing the benefits of chronobiology for therapeutic interventions.

Figure 4. At the cellular level, circadian rhythms are coordinated by the network of core clock proteins (see Figure 2). Without signals from the central pacemaker, organs and systems can uncouple into free-running rhythms. SCN = suprachiasmatic nuclei.

Circadian Amplitude

Circadian rhythms depend on a robust circadian amplitude: the number of clock-driven proteins that are expressed, or the degree to which they oscillate over 24 h (see Figure 3). It follows that the activity of antitumor pathways that rely on clock proteins will vary with clock gene expression, posttranslational modification, and localization. Disruptions to the rhythm decrease the circadian amplitude [59,71].
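Amplitude and phase in this sense are quantities that can be estimated from data. A minimal cosinor-regression sketch follows; cosinor analysis is a standard chronobiology method rather than one introduced by this review, and the sampled values below are synthetic.

```python
import numpy as np

def cosinor_fit(t_h, y, period_h=24.0):
    """Least-squares fit of y(t) = mesor + A*cos(w*(t - acrophase))."""
    w = 2 * np.pi / period_h
    X = np.column_stack([np.ones_like(t_h), np.cos(w * t_h), np.sin(w * t_h)])
    mesor, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    acrophase_h = (np.arctan2(b, a) / w) % period_h   # hour of the fitted peak
    return mesor, amplitude, acrophase_h

rng = np.random.default_rng(1)
t = np.arange(0, 48, 2.0)                 # sample every 2 h for 2 days
y = 10 + 3 * np.cos(2 * np.pi * (t - 6) / 24) + rng.normal(0, 0.5, t.size)
print("mesor=%.2f amplitude=%.2f acrophase=%.1f h" % cosinor_fit(t, y))
```

The fit recovers the 6-h acrophase and amplitude of 3 built into the synthetic series; in real data, a damped fitted amplitude would flag the kind of rhythm disruption described above.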
One biomolecular illustration: the function of the gamete-expressed protein PAS domain containing 1 (PASD1) is to suppress the circadian amplitude, and it becomes oncogenic when expressed ectopically in somatic cells [72] by inhibiting apoptosis [73].

Circadian Disruptions and Breast Cancer

This section will delineate the importance of chronobiology in the disease progression of breast cancer by demonstrating epidemiological evidence and molecular associations.

Epidemiology

Researchers have characterized extensive relationships between circadian dysfunction and human cancer development, prognosis, and treatment [22,25]. In fact, shift work that involves circadian disruption has been recognized by the World Health Organization's International Agency for Research on Cancer as a probable human carcinogen (Group 2A) since 2007 [74]. Since then, many studies have corroborated the increased incidence of breast cancer in women exposed to light at night, an occupational hazard of shift work. Recent epidemiological studies are summarized in Table 2. The correlation is strongest with longer exposures to occasional night shifts (>20 years), shorter durations of continuous night shift, or when the history of shift work occurred in early adulthood. The increased breast cancer incidence in night shift workers is likely multifactorial but may be related in part to melatonin suppression arising from exposure to artificial light at night [21]. Understanding the biological basis for the epidemiological observations represented in Table 2 may illuminate the clinical implications of circadian function.

Impact of Circadian Disruption on Health Disparities

Epidemiologic data suggest that Black and African American (AA) patients experience a greater burden of circadian disruptions due to social impacts and other external factors that alter levels of melatonin in the body. AA patients experience worse breast cancer outcomes even when treated with the same therapies, and circadian disruptions might be one factor in this disparity. In the United States, more AA workers perform rotating shiftwork than their Caucasian counterparts, and this disparity is expected to increase over time [42]. There are data that further explore the link between poor sleep quality and the development of triple-negative breast cancer in AA women [87]. In addition, AA patients may have a slower response to circadian phase shifts than Caucasian Americans, suggesting that the effects of circadian disruption might be longer-lived, allowing their risks to compound over time [88]. Together, the higher rates of night-time shiftwork among AA patients and the resulting circadian and estrogen perturbations warrant further research to explain the nuances of their effects.

Melatonin and Breast Cancer

Melatonin, 5-methoxy-N-acetyltryptamine, is a naturally occurring hormone that is produced from tryptophan by the pineal gland. It is secreted in response to the environmental change from light to darkness. In humans, this helps synchronize organ systems in anticipation of the inactive or "rest" phase (see Figure 4). In all mammals, melatonin peaks in the evening, whereas cortisol peaks in the morning. In fact, the time at which melatonin begins to spike in the evening (dim-light melatonin onset, DLMO) is the current gold standard for measuring an individual's circadian phase. Cortisol release as well is partly under the control of the suprachiasmatic nuclei [59].
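Operationally, DLMO is usually read off a series of evening samples collected under dim light as the time melatonin first crosses a fixed threshold. A minimal sketch, assuming the common saliva convention of a 3 pg/mL threshold and using hypothetical sample values:

```python
def dlmo(times_h, melatonin_pg_ml, threshold=3.0):
    """Return the clock time (hours) at which melatonin first crosses the
    threshold, using linear interpolation between successive samples."""
    for (t0, c0), (t1, c1) in zip(zip(times_h, melatonin_pg_ml),
                                  zip(times_h[1:], melatonin_pg_ml[1:])):
        if c0 < threshold <= c1:
            return t0 + (threshold - c0) / (c1 - c0) * (t1 - t0)
    return None  # threshold never crossed during the sampling window

# Half-hourly samples from 19:00 to 23:00 under dim light (invented data).
times = [19.0, 19.5, 20.0, 20.5, 21.0, 21.5, 22.0, 22.5, 23.0]
conc  = [0.8,  1.1,  1.6,  2.2,  3.9,  7.5, 12.0, 18.0, 22.0]
print(f"DLMO ~= {dlmo(times, conc):.2f} h")   # ~20.74, i.e., about 20:45
```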
Melatonin dysregulation is also linked to cancer development since it impacts anaerobic glycolysis, DNA repair, and angiogenesis [89]. Breast cancer models have shown that light-induced melatonin suppression leads to increased blood glucose and facilitates tumor cell proliferation; conversely, increased melatonin decreases the Warburg phenomenon and inhibits tumor growth [43]. Retrospective analyses have shown decreased night-time melatonin in women with estrogen receptor-positive (ER+) breast cancer and correlations between tumor size and peak level of night-time melatonin [90]. This is consistent with the murine xenograft breast cancer model in which transfusions of melatonin-rich blood significantly reduced tumor burden in comparison to transfusions from age-matched women whose serum melatonin levels had been suppressed by exposure to bright light [21,43]. Melatonin also decreases estrogen production, which may occur via the CRY-interacting protein TIMELESS that regulates sphingolipid metabolism-directed breast cancer cell growth [91].

Molecular Clock Dysfunction and Breast Cancer Risk

Different clock gene mutations and expression patterns have been implicated in the development of several cancers as well as poorer outcomes. With respect to breast cancer development, rhythmic clock gene expression is suppressed or obliterated in more aggressive cancer types, whereas a functional circadian clock is often retained in ER+, human epidermal growth factor receptor 2-negative (HER2-), low-grade breast cancers that have not yet metastasized [51]. This underscores the importance of a balanced circadian network; loss of positive-limb function reduces circadian amplitude, which can result in the loss of tumor-suppressing activity from the negative limb (Figure 5) [92].

Figure 5. Pathway from artificial light at night to breast cancer formation. This flowchart illustrates the purported sequence of events predisposing night shift workers to breast cancer. Exposure to artificial light at night and other improperly timed cues like meals leads to circadian disruption, blunting the nightly secretion of melatonin. If the suprachiasmatic nuclei fail to integrate conflicting time signals (compromising appropriate clock gene expression), this diminishes their ability to synchronize tissues and organs, leading organ systems to develop asynchronous free-running rhythms. This inconsistent signaling can disrupt the core clock network of individual cells; clock gene dysfunction makes cells more oncogenic and tumor permissive. At the cellular level, circadian rhythms are coordinated by the network of core clock proteins (see Figure 2). See also Section 4.2 for a schematic of cell cycle gating, a key component of the DNA damage response.
Available meta-analyses of cancer patients' clock gene expressions did not account for time of sample collection; nevertheless, differential patterns have been described. Low PER1 and PER2 expression is linked to breast cancer development and poorer outcomes [71]. Comparing breast cancer to adjacent tissue, PER1, PER2, PER3, and CRY2 levels are decreased; CLOCK is increased; and CRY1 downregulation was found to escalate directly with breast cancer stage [93]. Silencing of the negative-limb regulator DEC2, a purported intermediary between circadian rhythm and tumor progression, enhanced the viability, invasiveness, and colony-forming potential of breast cancer samples [32]. Differences in the function of the timeless circadian regulator protein (TIMELESS), an effector of the core clock, have been correlated specifically with ER+ and progesterone receptor-positive (PR+) breast cancers, i.e., this is another example where clock effector function has been linked to hormone-sensitive cancer development. Similarly, different levels of DEC1 and DEC2 mRNA were measured among breast cancer populations, with increased expression in PR+ cases and decreased expression in HER2+ cases [54]. The CRY2 genotype of breast cancer patients has also been correlated with ER status [37,44,94], and PER3 loss is associated with recurrent ER+ tumors [95]. For each of these, future research might consider whether these decreases represent lower peak levels (i.e., a deficiency in their circadian maxima) or constitutively reduced baseline levels around the clock. In terms of genetic predisposition, logistic regression analyses have linked different CLOCK, CRY1, and PER2 genotypes to breast cancer risk [37], and several studies have implicated specific single-nucleotide polymorphisms [45]. Specific TIMELESS alleles have been correlated with hormone-sensitive breast cancers; furthermore, hypomethylation of the TIMELESS promoter is implicated in higher-stage breast cancers, and breast cancer has been shown to overexpress TIMELESS relative to normal breast tissue [37].
Breast Cancer Outcomes and Treatment Response

Clock gene expression patterns have been observed in breast cancers with different clinical features, although again it is unclear whether this reflects constitutive downregulation of certain circadian regulators or a deficiency of their rhythmic peaks. Broadly, higher expressions of PER1, PER2, PER3, and CRY2 were associated with longer metastasis-free survival, and distinct prognostic patterns were found to correlate with different changes in clock gene expression depending on ER, PR, and HER2 status [93,96]. These molecular changes may underlie the downstream effects of exposures that have been implicated in breast cancer progression. For example, animal models showed that light at night disrupted nocturnal melatonin signaling, which ultimately disinhibited the growth and metabolism of breast cancer cells [97]. Light-induced melatonin suppression has also been associated with the resistance of breast cancer xenografts to chemotherapy and tamoxifen [98,99]. There are also data suggesting that responses to radiation therapy are impacted by circadian factors. The heart, an important organ at risk during radiation for breast cancer, has been shown to be at higher risk for toxicity based on circadian disruption. Mice with disrupted circadian rhythms, either through environmental sleep disruption or genetic Per disruption, had more post-radiation cardiac dysfunction and increased fibrosis [100]. There are also clinical trials showing that the time at which radiation therapy is given impacts outcomes [101,102], which will be further discussed in Section 6 (Chronoradiotherapy). The ability of therapeutic radiation to effectively treat cancer relies on overcoming tumor cells' DNA damage response and capacity for repair. It is clear that circadian factors and clock genes regulate the cell cycle and would therefore have an impact on radiation treatment [103].

Radiobiological Principles

When targeting solid tumors, radiation oncologists often take advantage of the "four Rs of radiation biology": repair of DNA damage, redistribution of cells in the cell cycle, repopulation, and reoxygenation of hypoxic tumor areas, all of which have been shown to be influenced by circadian regulation. Radiation treatments are often fractionated, i.e., given as one treatment a day over a period of several weeks. The interval between each radiation dose gives the surviving tumor cells time to redistribute across the cell cycle so that a new portion of tumor cells will progress to G2/M, which is beneficial because radiation and the reactive oxygen species it produces are more lethal to cells in the G2/M phase [104]. Daily treatments also allow the solid tumor to reoxygenate via the blood vessel network created by the tumor, increasing the delivery of oxygen after each treatment and the tumor's sensitivity to the next dose, while allowing normal cells in the vicinity to repair sublethal DNA damage and begin repopulating the surrounding tissue [46]. Understanding how biological alterations of circadian function can alter the "four Rs" will allow for the development of strategies that improve the therapeutic index of radiation treatments, i.e., maximizing its efficacy at killing malignant cells while minimizing its toxicity to normal tissue.
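The sparing effect of fractionation can be made concrete with the linear-quadratic (LQ) model, a standard radiobiology formalism that this review does not detail; the α/β values below are typical textbook figures, not data from the studies cited here.

```python
import math

def surviving_fraction(n_fractions, dose_per_fx, alpha, beta):
    # Standard LQ model: SF = exp(-n * (alpha*d + beta*d^2))
    return math.exp(-n_fractions * (alpha * dose_per_fx + beta * dose_per_fx**2))

total_dose = 50.0  # Gy
for n in (1, 5, 25):   # single shot vs. hypo- vs. conventional fractionation
    d = total_dose / n
    tumor  = surviving_fraction(n, d, alpha=0.30, beta=0.03)   # alpha/beta = 10
    tissue = surviving_fraction(n, d, alpha=0.09, beta=0.03)   # alpha/beta = 3
    print(f"{n:2d} x {d:4.1f} Gy: tumor SF = {tumor:.2e}, late tissue SF = {tissue:.2e}")
```

Running the loop shows that, for the same total dose, splitting it into more fractions raises survival of the low-α/β (late-responding normal) tissue far more than survival of the high-α/β tumor, which is the quantitative core of the therapeutic-index argument above.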
Cell Cycle Gating

Rodent models have long shown that mammals have differential radiation responses according to the circadian phase when they are irradiated. Mouse models have shown lower levels of DNA repair in skin cells in the morning, causing higher susceptibility to ultraviolet radiation. Even LD50, the lethal dose of X-ray irradiation in animal models, has been shown to oscillate as a function of time [47,105]. Similarly, individual cells are most sensitive to radiation in the G2/M phase of the cell cycle, which is subject to clock gene regulation and the circadian phase [23,57,106] (Figure 6).

Figure 6. Cell cycle checkpoints that depend on or are regulated by core clock proteins, whose levels and activity fluctuate over the circadian period [32,48,53,57,107-111]. Some cell cycle gating mechanisms are induced by DNA damage, such as CHEK2:ATM and CHEK1:ATR. Cells paused at the G2/M checkpoint by DNA damage responses (DDR) or other mechanisms are prevented from progressing into mitosis, keeping them from passing along mutations while increasing their radiosensitivity.

Regulators of cell cycle progression are regulated in turn by components of the core clock network, i.e., positive- and negative-limb clock proteins, whose cellular activities oscillate over 24 h. Depending on cell type, this leads to windows of opportunity for cell division and a bidirectional relationship between the circadian period and the cell cycle. For example, a stalled DNA replication fork can trigger a CRY- and TIMELESS-dependent pathway that prevents the cell from proceeding through G2/M, but mouse models have shown that this response depends on the circadian availability of CRY [110,111].
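A toy simulation can illustrate the interaction of redistribution with G2/M-dependent kill; the phase progression rule and survival probabilities below are invented for illustration, not measured radiobiological parameters.

```python
import random

PHASES = ["G1", "S", "G2/M"]
SURVIVAL = {"G1": 0.7, "S": 0.8, "G2/M": 0.3}   # G2/M cells are most radiosensitive

def advance(phase):
    # Crude daily progression: each surviving cell moves to the next phase
    # with probability 0.5 between fractions (after G2/M it re-enters G1).
    i = PHASES.index(phase)
    return PHASES[(i + 1) % len(PHASES)] if random.random() < 0.5 else phase

random.seed(0)
cells = [random.choice(PHASES) for _ in range(100_000)]
for fraction in range(1, 6):
    cells = [c for c in cells if random.random() < SURVIVAL[c]]   # irradiate
    cells = [advance(c) for c in cells]                           # redistribute
    print(f"after fraction {fraction}: {len(cells)} cells remain")
```

Because redistribution keeps refilling the sensitive G2/M compartment between fractions, cumulative kill exceeds what a single population frozen in resistant phases would experience; clock-gated G2/M entry would make this refill rate, and hence radiosensitivity, time-of-day dependent.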
The respective networks that drive these manifold processes have been shown to meet at multiple regulatory nodes relevant to oncogenesis and cancer progression, e.g., the S-prolonging effect of DEC1 was observed to suppress growth in a breast carcinoma xenograft model [53]. A key component of the interplay between circadian rhythms and the cell cycle is the activity of BMAL1:CLOCK, which communicates between the clock network, c-MYC, and WEE1 [16,23,56]. As a result, chronobiology can influence the mitotic index of a particular tumor type, e.g., the radiosensitivity of human nasopharyngeal carcinoma xenografts has been shown to oscillate with the circadian period [106].

Double-Strand DNA Breaks

The circadian clock has been shown to gate several points in DNA damage response (DDR) pathways. For example, in response to the double-strand breaks in DNA caused by radiation, the DDR requires PER1 to bind ATM:CHEK2 in order for it to halt cell cycle progression and trigger p53-mediated apoptosis if the damage persists (see Figure 6). Ectopic PER1 expression in human cancer cell lines impairs malignant growth, and reduced levels of endogenous PER1 are found in human breast cancer. PER2 has also been shown to operate both as a tumor suppressor and as an important facilitator of the DDR. In murine models, PER2 was necessary for the radiation-induced upregulation of clock gene proteins that resulted in better tumor suppression and survival [112]. In human cells, both PER1 and PER2 have been shown to facilitate apoptotic pathways driven by the tumor suppression protein p53 [44,48,74]. Together, this indicates that, perhaps, radiation efficacy could be potentiated in certain cells during times with high levels of PER. These high-PER periods could either be predicted from a patient's circadian phase or induced by manipulating zeitgebers like food intake. Notably, double-strand DNA breaks can be repaired with high fidelity only by enzymes that perform homologous recombination, of which BRCA1 and BRCA2 are examples, which could suggest that proper PER function may be a last line of defense for cells bearing BRCA mutations. In cells that lack a functional BRCA, this would make PER perturbations especially hazardous for the stability of the genome. It is also possible that their respective DDR pathways are not entirely redundant; BRCA1 has been shown to interact with PER1 and PER2 in a yeast two-hybrid model [48,96,113], and specific PER mutations are purported to predict BRCA patients' response to chemotherapy and survival [114].

Hypoxia Responses and Reoxygenation

Shifting the oxygenation profile is also key to helping with radiation sensitivity. Interestingly, the classical role of BMAL1:CLOCK is to promote the transcription of genes whose promoters contain the E-box element, like the gene for hypoxia-inducible factor 1α (HIF-1α), the subunit of HIF-1 whose expression is regulated by oxygen levels [25,48]. The relationship between HIF-1 activity and circadian regulation, however, is more nuanced than "positive-limb proteins promote, negative-limb proteins inhibit"; for example, PER2 recruits HIF-1 to its target genes [115].
Though beyond the scope of this review, circadian and HIF pathways constitute yet another bidirectional relationship, in this case serving to regulate metabolic adaptations to low oxygen levels [30]. It has been suggested that any circadian reprogramming that leads to an overexpression of HIF-1α can open a path for malignant transformation. HIF-1α is overexpressed in tumor cells, enabling several adaptations like the upregulation of angiogenic factors and glycolytic enzymes to maintain ATP production in the absence of more complex pathways like fatty acid oxidation. Shen et al. note that, because circadian dysrhythmia is known to increase the radiosensitivity of healthy tissue, the altered clock networks of tumor cells in HIF-1α-driven cancers may make them more sensitive to radiation than the healthy tissues surrounding them [25].

Epidemiologic Data Linking Metabolism and Circadian Dysregulation

Desynchronization of circadian rhythms has also been implicated in the etiology of metabolic diseases like obesity, type 2 diabetes, and cardiovascular disease [31,58,116-118]. People who are obese or diagnosed with type 2 diabetes have more circadian dysfunction, potentially leading to an increased incidence of cancer or progression of disease. Changing the time that a person eats or the composition of their diet can alter and decouple the peripheral circadian clock. It has been shown that shift work is linked to higher risk of both obesity and diabetes [119].

Energy Sensing

The circadian phase modulates dietary processes, from an organism's food-seeking behavior all the way down to the nutrient preference of an individual cell. Food intake can act as a secondary zeitgeber to modulate cells' circadian phase. Circadian rhythms evolved to encourage feeding and anticipate nutrient availability during the active phase [1]. For example, BMAL1 activity increases the availability of NAD+, which ultimately activates liver enzymes that are involved in fatty acid oxidation, increasing ATP production and decreasing the preference for monosaccharides as a nutrient source. Mice without functional BMAL1 genes are deficient in this pathway, and they also demonstrate impaired hunger drive, which provides negative feedback against BMAL1:CLOCK activity [28,120]. One notable core clock effector is the nocturnin protein (NOCT), whose level of expression fluctuates throughout the day and communicates between the circadian phase and lipid metabolism [121]. NOCT governs nutrient use by regulating the transcripts of proteins necessary for mitochondrial function and the citric acid cycle [122], and it can modulate a cell's NAD+ availability without affecting its redox status or NAD+:NADH ratio [123]. Finally, when the core clock network initiates CRY1 destabilization, the metabolic pathway driven by AMPK is directly affected [49,50]. These are only a few key examples that demonstrate how metabolic feedback systems are braided directly into the circadian network.

Dietary Modulation

Because nutrient availability is known to entrain circadian phases and modulate metabolic pathways, it stands to reason that dietary modification could be used to affect cancer outcomes. Clinical trials of dietary modulation (caloric restriction, intermittent fasting/time-restricted feeding, and carbohydrate restriction/ketogenic diet) have been of interest to oncologists, and several have produced promising results for different cancer populations [124,125].
Restricting food intake to designated time windows has been shown to dramatically reduce serum growth hormone, leptin, and insulin, increasing insulin sensitivity three-fold and selectively rendering tumor cells more susceptible to cytotoxic therapies. In fact, caloric restriction has been shown to enhance radiotherapy for triple-negative breast cancer [126] by increasing tumor control and decreasing metastasis [127,128]. Syngeneic animal and in vitro models have suggested that the synergistic effect happens in a phase-dependent fashion, with radiation given in the nutrient-deprived phase [126]. Supplementing radiation treatment with dietary modulation strategies like time-restricted feeding may improve tumor control by regulating circadian functions. These periods of nutrient deprivation or fasting may in fact mediate their antitumor effects by using the same machinery that allows food intake to act as a secondary zeitgeber, e.g., caloric restriction drives an oxygen-dependent cell environment via pathways that use core clock proteins and other circadian effectors. Although beyond the scope of this review, the microbiome is also an important consideration; gut microbiota have been proposed as a mediator of circadian radiosensitivity [17]. Understanding the biological effects of fasting on circadian function may allow for optimizing radiation response. Since food intake is an important secondary zeitgeber, it follows that a scheduled "zeitgeber diet" could potentiate clock entrainment [28,64,121].

Chronoradiotherapy

The emerging field of chronoradiotherapy examines biological relationships between circadian regulation and cellular radiation responses in order to improve the therapeutic index of radiation. To date, there are limited preclinical and clinical data suggesting that altering circadian mechanisms could be used to improve outcomes. Preclinical evidence demonstrates that an organism's response to radiation will vary across its circadian period, i.e., model animals demonstrate circadian radioresistance and radiosensitivity [129]. In human xenograft models, chrono-modulated radiotherapy was noted to improve tumor control, and it demonstrated a synergistic effect with other cytotoxic therapies [106]. Although there are few data on the use of primary zeitgebers to alter the radiation response, there are data demonstrating that secondary zeitgebers such as diet may alter the molecular milieu to improve radiation response. Due to the bidirectional relationships between circadian phase, metabolism, and adipocyte activity, it is worth investigating the extent to which chronobiology might underlie the preliminary success of interventions like caloric restriction and time-restricted feeding for improving radiotherapy outcomes. To date, at least seventeen clinical studies have demonstrated that the time of radiation delivery can affect toxicity, local control, and overall survival [60,130,131] (studies on non-breast cancers are outlined in the Supplementary Table S1). The current data are overall limited and include varying results, but the findings may generate hypotheses for further research. Importantly, none of these studies utilized biomarkers or questionnaires to identify patients' individual circadian phases at the time of their treatments, i.e., they used external time as a proxy for physiologic time. Table 3 outlines the two breast-cancer-specific studies published to date; neither of them, however, constitutes actionable clinical recommendations at this time.
The retrospective breast cancer study suggested that patients who received doses after 15:00 h had a higher incidence of grade 2 or higher acute skin toxicity than patients treated before 10:00 h [102]. This seems to contrast with the results of the prospective trial, which found that radiation before 12:00 h increased the rate of acute breast erythema versus radiation after 12:00 h [101]. The latter study's preference for later radiation administration is more consistent with the temporal radiotoxicity profiles of cervical, rectal, and esophageal cancer treatments (see Supplementary Table S1); however, the greater temporal separation between groups in the retrospective study should have made it better equipped to detect the effect of time of day, assuming each patient had an ideal, eurhythmic circadian rhythm (see Figure 3). Although there is a paucity of molecular epidemiological data to validate hypotheses, it has been noted that a structural variant of PER3 was associated with the incidence of breast cancer in young women [82], and patients with PER3 variants also carry a higher burden of long-term treatment toxicity. The time-dependence of delayed breast toxicity after radiation seems to depend on PER3 and NOCT alleles. The increased incidence of late erythema for the morning group in the study by Johnson et al. was shown to depend on patients' genotype (p = 0.03), i.e., a single-nucleotide polymorphism in NOCT (a link between circadian rhythms and metabolism) and a variable-number tandem repeat in PER3 [101]. This is yet another example where the status of clock genes has been associated with treatment response, suggesting that future studies should include chronobiologic data like patient chronotypes, as they may impact the results [129]. As more data are collected, attention must be paid to what effects are being examined in each study design, appreciating that effects will likely vary from tissue to tissue. For example, hormone-sensitive cancers may respond differently to chronobiological regulation. One retrospective review found that the time-dependent improvement in response rates for palliative bone irradiation was only observed in female patients, which Chan et al. speculated may be related to differential ratios of sex hormones [27]. It is also important to note that some radiation modalities and fractionation regimens have different effects at the cellular level, and they might therefore be expected to interface with circadian clock effectors differently, e.g., the role of PER1 in double-strand DNA breaks.

Chronopharmaceuticals

Of the 100 top-selling drugs in the United States in 2014, 56% specifically targeted the product of a circadian gene [58]. There is growing interest in how chronobiology affects a patient's response to medications and how medications can alter the core clock network [132-134], particularly in the context of breast cancer (see Table 1). Additionally, circadian rhythms have been known to impact pharmacokinetics significantly [26,114]. This is especially true for agents that act on circadian hormone receptors, e.g., glucocorticoids are more effective if given in the morning, when the body is prepared to receive signaling from the time-dependent spike in endogenous cortisol levels [3]. Extensively researched and detailed by the National Institutes of Health, melatonin remains the only hormone available over the counter as a dietary supplement. If taken orally, serum melatonin levels peak approximately 1 h after ingestion.
Taking exogenous melatonin in the morning will shift the circadian phase later, and melatonin at night shifts it earlier [135]. Complementing this regimen with timed lighting leads to a more profound phase shift: light in the morning and melatonin at night can shift the clock 1.5-2.5 h earlier per day; melatonin in the morning and light at night can shift the clock up to 2.5-3.5 h later per day. Properly timed, exogenous melatonin has been shown to decrease the latency of sleep onset and increase sleep efficiency, especially in patients with a circadian offset or a primary sleep disorder [136]. This suggests that clinical trials using timed melatonin or prescription melatonergic drugs like tasimelteon and ramelteon for circadian entrainment may also benefit patients with radiotherapy-induced fatigue [137]. There are a few human trials evaluating melatonin as an intervention to decrease the side effects of breast cancer treatment. A phase II trial found that a melatonin emulsion significantly reduced radiation dermatitis [138]. Another prospective phase II trial for women with metastatic breast cancer showed that melatonin improved both subjective and objective sleep quality [139,140]. Additionally, outcomes from various trials have shown improvements in the levels of depression and fatigue in breast cancer patients [141]. Considering the prevalence of comorbid depression among cancer patients, it is worth noting that, of the newer antidepressants researched in a Lancet meta-analysis, the melatonergic agent agomelatine ranked among the most effective and best tolerated, comparable to fluoxetine [142]. Despite provocative preclinical data and the retrospective and prospective reviews linking circadian disruptions to breast cancer risk, there remains a paucity of clinical data investigating melatonin as a therapeutic intervention, but research is underway. Though better known for its role as a radioprotective antioxidant, melatonin induces radiosensitization in tumor cells [143]. In vitro studies have shown that melatonin can act synergistically with tamoxifen and aromatase inhibitors [144,145]. In one clinical trial in women with metastatic hormone receptor-negative breast cancer who were no longer eligible for further chemotherapy, patients were randomized to tamoxifen alone versus tamoxifen with melatonin. Partial response rates and one-year survival were significantly higher in the melatonin adjunct group [146]. Interestingly, there has also been preclinical experimentation with novel melatonin-tamoxifen conjugate drugs [147]. Although the field is still in its infancy, combining cytotoxic therapy with pharmaceutical interventions that alter the circadian phase or the function of clock proteins has the possibility of enhancing therapeutic indices, increasing tolerability, and improving breast cancer outcomes.

Limitations of Past Research

There are important limitations to the existing body of knowledge with regards to how circadian rhythms affect radiation for breast cancer and how entrainment might be used to improve cancer outcomes. There is a paucity of research on chronobiology in clinical medicine, with less than 1% of ongoing clinical trials incorporating time-of-day considerations [52,148]. Several retrospective clinical studies have found associations between time of radiation delivery and outcomes, but few have been prospective in nature (see Tables 3 and S1).
Although individual studies demonstrated statistical significance, study designs varied widely, and the differences in how patients were grouped present a major obstacle to forming strong consensus conclusions [131]. It has been suggested that future time-of-day studies should compare groups of patients who received radiation within different narrow time windows that are separated by a few hours [149], rather than dividing groups by arbitrary cutoff times. This would ensure consistent differences in the timing of doses between patients of different groups. Furthermore, clinical radiotherapy data only exist for typical work hours, but it is possible that optimal treatment times occur overnight; bone marrow radiotoxicity is milder during the rest phase of eurhythmic mice, and circulating levels of innate and adaptive immune cells peak at different points of the night in humans [12,150]. Choosing homogeneous study populations may better resolve any time-dependent differences in the therapeutic index. In the setting of breast cancer, data support the notion that histology and breast cancer subtype may influence the constellations of clock gene changes, which would need to be studied to harness the potential of chronoradiobiology. Another limitation in the case of chronoradiotherapy is that the radiation modality being used may affect outcomes because of the core clock's involvement in specific aspects of DNA damage repair. The considerations that could be included in future reviews and clinical studies are listed in Table 4. Prior studies have not measured patients' circadian phase at the time of their radiation (see Supplementary Table S1) but rather have only used external time as a proxy for internal time, which is not an accurate representation, particularly in patients with circadian dysrhythmia [151]. Studies can either evaluate circadian phase in a sleep lab or approximate it using questionnaires or clock gene expression [60,152]. Most studies have been retrospective reviews, in which these measurements are unobtainable. Moreover, no radiation study has considered patients' chronotypes, their "morningness" or "eveningness". Chronotype was in fact shown to correlate with chemotherapy toxicity in women treated for breast cancer. If quantified with a questionnaire, an individual's chronotype could also be used to calculate their expected circadian phase at a given time of day [153,154], for research purposes or for optimal treatment timing.
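A sketch of what such a calculation might look like follows. The linear mapping from a Morningness-Eveningness Questionnaire (MEQ) score to an estimated DLMO is a made-up placeholder; a real study would calibrate this against measured dim-light melatonin onset [153,154].

```python
def estimated_dlmo(meq_score):
    """Map an MEQ score (16-86; higher = more 'morning') to an estimated
    DLMO clock hour. Hypothetical mapping: strong morning types ~19:30,
    strong evening types ~23:30."""
    frac = (meq_score - 16) / (86 - 16)   # 0 = extreme evening, 1 = extreme morning
    return 23.5 - 4.0 * frac              # hours on the 24-h clock

def circadian_phase(meq_score, clock_time_h):
    """Hours elapsed since estimated DLMO (modulo 24)."""
    return (clock_time_h - estimated_dlmo(meq_score)) % 24

# Two patients treated at the same external time, 10:00, can sit at quite
# different internal phases:
for meq in (70, 30):                      # morning type vs. evening type
    print(f"MEQ {meq}: phase at 10:00 = {circadian_phase(meq, 10.0):.1f} h after DLMO")
```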
Future Directions

Optimizing chronoradiotherapy could provide innovative adjuvant treatment solutions to improve cancer outcomes for our patients. The first step toward true chronoradiotherapy in the clinic would be to have a consensus on the optimal "time of day" for treating a specific disease, e.g., early-stage triple-negative breast cancer. This "time of day" really refers to the circadian phase that would optimize the therapeutic index of radiotherapy. The search for this optimal phase entails important caveats. The hour of maximal tumor radiosensitivity might not be the hour of maximal radioresilience for healthy tissue; if they differ, it will be crucial to determine which is more clinically relevant. Indeed, one might expect tumors to desynchronize from an individual's circadian rhythm, considering that malignant cells often have dysfunctional clock gene expressions. A recent prostate cancer model demonstrated tumor behavior that was in phase with the circadian rhythm of host mice, but desynchronization into non-24-h rhythms has been documented in a variety of human cancers [155,156]. Future research might consider to what extent tumors resynchronize in response to host entrainment. To ensure reproducible results of clinical studies, researchers would have to measure patients' circadian phase at the time of their treatments, rather than relying on the time of day. We have discussed the use of dim-light melatonin onset to define the beginning/end of a person's circadian period. The timing of the morning spike in cortisol and sleep questionnaires are other options, but none of these can measure the circadian phase at any desired instant. Other proposed alternatives include heat-based sensors to track core body temperature, heart rate variability monitors, actigraphy watches to track sleep-activity data, and other sensor-based technologies [58,151]. Peripheral clock gene expression has also been proposed, in which case samples could be collected just before radiation treatment, from blood or possibly hair [152]. However, even if researchers are able to arrive at a consensus about the ideal time window for radiotherapy for a given type of breast cancer, it is not practical to treat every patient at the same time of day; furthermore, we have seen that external time does not always line up with patients' internal time. This fact may obscure our interpretation of the abovementioned time-of-day studies, but clinically we can use it to our advantage. Future studies, using low-cost, low-risk strategies for entraining a person's circadian rhythm to a desired phase, exemplified in Figure 7, and coupled with standard radiation, could improve radiation response. Bright blue light and exogenous melatonin are known to shift the circadian phase. In addition, scheduled feeding has shown even greater efficacy than melatonin supplementation in a rodent model that compared the two methods of entrainment [157,158].

Figure 7. Hypothetical tracing of therapeutic index as a function of time of day. Using zeitgebers, pathologic circadian rhythms can be entrained to an appropriate phase, a stronger amplitude, or even an altered period [159]; this provides a means for high-precision chronotherapy that does not rely on the time of day, e.g., zeitgeber-driven chronoradiotherapy. One day, molecular imaging may be able to detect clock phase, and machine learning could be coupled with radiomics mapping to enhance radiotherapy dose painting [48].

These highly technical developments may arise in the future; however, given the potential benefits of chronoradiotherapy for breast cancer, there is value in working toward a technologically simpler intervention that could be safely implemented in clinical trials and eventually adopted at facilities with fewer resources.
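The hypothetical tracing in Figure 7 can be rendered numerically: model tumor kill and normal-tissue toxicity as cosines over the circadian day and scan for the hour that maximizes their difference. All mesors, amplitudes, and acrophases below are invented for illustration, not fitted to any dataset.

```python
import math

def cosine_rhythm(t_h, mesor, amplitude, acrophase_h, period_h=24.0):
    return mesor + amplitude * math.cos(2 * math.pi * (t_h - acrophase_h) / period_h)

def therapeutic_index(t_h):
    # Stand-in "index": time-varying tumor kill minus time-varying toxicity.
    tumor_kill = cosine_rhythm(t_h, mesor=1.0, amplitude=0.3, acrophase_h=15.0)
    toxicity   = cosine_rhythm(t_h, mesor=0.5, amplitude=0.2, acrophase_h=9.0)
    return tumor_kill - toxicity

best_hour = max((t / 2 for t in range(48)), key=therapeutic_index)   # half-hour grid
print(f"best treatment window centers near {best_hour:.1f} h "
      f"(index {therapeutic_index(best_hour):.2f})")
```

The caveat named in the text falls straight out of the model: when the tumor and tissue acrophases differ, the hour of maximal tumor kill and the hour of minimal toxicity are different hours, and the clinically relevant optimum depends on how the two curves are weighted.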
Zeitgeber Diet

Once a consensus has been reached on an optimal circadian phase for treating a given disease with a particular modality, the objective would be to make a patient's internal time align with that phase at the time of their scheduled radiation treatment (see Figure 7). To entrain circadian rhythms for optimal radiation therapy, clinicians could prescribe a set of benign interventions that work together to manipulate the circadian phase while synergizing their beneficial effects, illustrated in Figure 8.

Figure 8. Proposed components of true chronoradiotherapy. Clinicians would start with a known optimal circadian phase for their patient's specific pathology, i.e., the biological timepoint at which radiation will cause the most tumor damage and the least tissue toxicity. When the patient is scheduled for a radiation treatment, they are assigned a schedule of time-restricted feeding and strictly timed bright blue lighting and melatonin doses, aiming to entrain their circadian phase via properly expressed clock genes so that their optimal phase aligns with the scheduled radiation time. This would enhance the circadian amplitude of peripheral cells and synchronize them to the optimal circadian phase for the patient's scheduled treatment time. Quality measures and research would involve confirming their circadian rhythm at each session. At the cellular level, circadian rhythms are coordinated by the network of core clock proteins (see Figure 2).
Recently, dietary modifications like caloric restriction have been shown to improve cancer care outcomes and enhance the effect of radiation, particularly in the notoriously aggressive triple-negative breast cancer [124,125]. For the zeitgeber diet, it would be helpful to determine whether some foods are stronger zeitgebers than others. By planning strategically timed windows of fasting, dietary restriction itself would act to reinforce the melatonergic regimen, while the circadian-entraining aspect of timed feeding could be used to potentiate the effect of timed lighting (e.g., see Supplementary Table S2). Together, we would expect these benign interventions to work in synergy, decreasing radiation toxicity while sensitizing breast tumors.

Figure 8. Proposed components of true chronoradiotherapy. Clinicians would start with a known optimal circadian phase for their patient's specific pathology, i.e., the biological timepoint at which radiation will cause the most tumor damage and the least tissue toxicity. When the patient is scheduled for a radiation treatment, they are assigned a schedule of time-restricted feeding and strictly timed bright blue lighting and melatonin doses, aiming to entrain their circadian phase via properly expressed clock genes so that their optimal phase aligns with the scheduled radiation time. This would enhance the circadian amplitude of peripheral cells and synchronize them to the optimal circadian phase for the patient's scheduled treatment time. Quality measures and research would involve confirming their circadian rhythm at each session. At the cellular level, circadian rhythms are coordinated by the network of core clock proteins (see Figure 2).
Conclusions

Understanding the interplay between chronobiology and radiobiology can lead to innovative therapies, which could be applied to improve radiation treatment response. The purpose and organization of circadian rhythms and the network of clock genes that maintain them are integral to understanding the discoveries that have already been made. Epidemiological and biomolecular evidence has linked circadian disruptions to breast cancer, with etiologies including melatonin suppression and impaired DNA damage response systems. Learning to entrain circadian function with timed interventions like intermittent fasting can induce antitumor environments and potentiate the efficacy of radiotherapy; such interventions possibly exert their effect through circadian effectors. Future studies should include biomarkers of circadian phase and the use of zeitgebers to reinforce circadian amplitude and ensure that each patient's circadian rhythm is shifted to a known phase at the time of their scheduled radiation (see Figure 7). Timed lighting, chronopharmaceutical agents, and time-restricted diets are all effective zeitgebers for shifting the circadian phase, but they have never been used in combination with the goal of promoting healthy clock gene expression and priming patients for time-dependent radiation treatment (see Figure 8). We emphasize the need for basic science research to direct future clinical studies. Therapeutic radiation is a mainstay of breast cancer treatment, and we strongly advocate for further research that might result in the inclusion of circadian entrainment to promote robust clock function, enhance the therapeutic index of radiotherapy, reduce radiation toxicity, and improve outcomes. Circadian disruption may contribute to the pathogenesis of breast malignancies, but by harnessing targeted circadian rhythm-entraining interventions, chronoradiotherapy may contribute to the development of innovative solutions.
2022-01-28T16:03:49.289Z
2022-01-25T00:00:00.000
{ "year": 2022, "sha1": "90a5edd544aebba64eaaff6e887ce14e135b1959", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/23/3/1331/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "66b796f082ef47f2700f9d714fd726d2e275c337", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
255495251
pes2o/s2orc
v3-fos-license
Alzheimer's Disease, Oestrogen and Mitochondria: an Ambiguous Relationship Hormonal deficit in post-menopausal women has been proposed to be one risk factor in Alzheimer's disease (AD), since two thirds of AD patients are women. However, large treatment trials showed negative effects of long-term treatment with oestrogens in older women. Thus, oestrogen treatment after menopause is still under debate, and several hypotheses trying to explain the failure in outcome are under discussion. Concurrently, it was shown that the amyloid-beta (Aβ) peptide, the main constituent of senile plaques, as well as abnormally hyperphosphorylated tau protein, the main component of neurofibrillary tangles, can modulate the level of neurosteroids, which notably represent neuroactive steroids synthesized within the nervous system, independently of peripheral endocrine glands. In this review, we summarize the role of neurosteroids, especially that of oestrogen, in AD and discuss their potentially neuroprotective effects, with specific regard to the role of oestrogens in the maintenance and function of mitochondria, important organelles which are highly vulnerable to Aβ- and tau-induced toxicity. We also discuss the role of Aβ-binding alcohol dehydrogenase (ABAD), a mitochondrial enzyme able to bind the Aβ peptide, thereby modifying mitochondrial function as well as oestradiol levels, suggesting possible modes of interaction between the three, and the potential therapeutic implication of inhibiting the Aβ–ABAD interaction.

Introduction

Steroid hormones are molecules, mainly produced by endocrine glands such as the adrenal gland, gonads and placenta, involved in the control of many physiological processes, mainly in the periphery, from reproductive behaviour to stress response. In 1981, Baulieu and co-workers were the first to demonstrate steroid production within the nervous system itself [1]. They showed that the level of some steroids, such as dehydroepiandrosterone (DHEA), was even four times higher in the anterior brain of rats than in plasma and nearly 18 times higher than in the posterior brain with regard to its sulphated form (DHEAS). Of note, the level of this steroid remained elevated in the brain even after adrenalectomy and castration. In the following decades, other steroids were identified to be synthesized in situ in the brain, and enzymatic activities of proteins involved in steroidogenesis have been shown in many regions of the central and peripheral nervous system, in neurons as well as in glial cells [2][3][4][5]. Thus, this category of molecules is now called "neurosteroids" and defines neuroactive steroids that are synthesized within the nervous system, independently of peripheral endocrine glands. While steroid hormones act at a distance from their glands of origin in an endocrine way, neurosteroids are synthesized by the nervous system and act on the nervous system in an auto/paracrine configuration. Because of their lipophilic nature, peripheral steroid hormones can freely cross cell membranes, including the blood-brain barrier, and play an important role in the development, maturation and differentiation of the central and peripheral nervous system. However, since some steroids are also synthesized within the nervous system, their blood levels do not necessarily correspond to their brain concentrations [6]. Intra-cerebral steroid synthesis seems to play a role in cognition, anxiety, depression, neuroprotection and even nociception [7].
The ability to cross cellular membranes allows neurosteroids to act on nuclear receptors, exerting genomic action by regulating gene transcription. This action seems to be important during neonatal life, where it has been shown that neurosteroids, such as progesterone (PROG) or oestradiol, are able to promote dendritic growth, spinogenesis, synaptogenesis and cell survival, particularly in the cerebellum [5]. Some studies have already demonstrated the role of neurosteroids, particularly oestrogens, in the regulation of glucose homeostasis and lipid metabolism [8] as well as in neuroprotection [9]. Risk for Alzheimer's disease (AD) is associated with age-related loss of sex steroid hormones in both women and men [10,11]. On the one hand, in postmenopausal women, the precipitous depletion of oestrogens and progestogens is hypothesized to increase susceptibility to AD pathogenesis, a concept largely supported by epidemiological evidence but refuted by some clinical findings, above all by results from the "Women's Health Initiative Memory Study" (WHIMS) (please see the detailed discussion in the "Conclusion" section). On the other hand, a growing body of evidence indicates a more gradual age-related decline in testosterone in men, similarly associated with increased risk of several diseases including AD. Since testosterone is at least in part aromatized in the brain to 17β-oestradiol, a loss of it may also affect oestrogen-mediated neuroprotective pathways. Moreover, the difference in how rapidly and significantly the female versus male primary sex hormones decline might partially contribute to the higher AD incidence in women than in men [10].

Alzheimer's Disease, Oxidative Stress, Effect of Gender and Neogenesis of Neurosteroids

AD is a neurodegenerative brain disorder and the most common form of dementia among the elderly, as shown by the worldwide prevalence of the disease, which amounted to 26.6 million people in 2006 [12]. Clinical symptoms are characterized by severe and progressive loss of memory, language skills, and spatial and temporal orientation. From a cellular point of view, the pathological hallmarks of AD are the presence of extracellular senile plaques, composed of aggregated amyloid-β peptide (Aβ), and intracellular neurofibrillary tangles (NFT), consisting of aggregates of abnormally hyperphosphorylated tau protein. Considerable effort has been made in recent years to understand the pathogenesis of the disease, particularly the role of the AD key proteins, Aβ and tau, in oxidative stress and mitochondrial dysfunction [13]. Epidemiological and observational studies demonstrated a higher prevalence and incidence of AD in women even after adjusting for age (about two thirds of AD patients are female), as well as a greater vulnerability to the disease [14]. Thus, at early stages of neurofibrillary tangle development, women exhibit greater senile plaque deposition than men [15], and AD pathology is more strongly associated with clinical dementia in female patients than in male patients [16]. The drop of oestrogen levels after menopause was proposed as one explanation for this phenomenon. However, there is little information concerning changes of steroid levels in the human brain during ageing and under dementia conditions. As steroids present in nervous tissues originate from the endocrine glands (steroid hormones) and from local synthesis (neurosteroids), changes in blood levels of steroids with age do not necessarily reflect changes in their brain levels.
The concentrations of a range of neurosteroids have recently been measured in various brain regions of aged AD patients and aged non-demented controls of both genders by highly sensitive GC/MS methods [6]. Schumacher and colleagues showed a general trend towards lower levels of steroids, including oestrogen, in AD patients compared to controls. Notably, neurosteroid levels were negatively correlated with Aβ and phospho-tau in some brain regions [6]. Another study using radioimmunoassay for steroid quantification demonstrated a decrease in oestrogen levels in post-mortem brains from female AD patients aged 80 years and older, but no significant difference in the 60-79-year age range compared to non-demented women [17]. However, in men, an age-dependent decrease of androgen levels was observed in the brain of non-demented subjects, which was even more pronounced in the brain of male AD patients [17]. Whereas large studies systematically investigating gender differences with respect to Aβ and/or tau pathology in post-mortem brain tissue from AD patients are missing, broad evidence has emerged from transgenic mouse models of AD indicating an increased Aβ burden and plaque number in the female brain compared to the age-matched male mouse brain [11,18]. Of note, consistent findings of greater Aβ burden in females were obtained in different animal AD models: Tg2576 (APPSWE) mice [19], APP/PS1 [20], APP23 [21], as well as in triple transgenic mice, like 3xTg-AD mice [18,22] and triple AD mice ([23]; with respect to gender differences: unpublished observations). Given that the estrous cycle in female mice repeats constantly until approximately 11 months of age and becomes irregular between 12 and 14 months, the data demonstrating a significant enhancement of Aβ load in important brain regions such as the hippocampus of females after the age of 11 months are striking. Regarding tau pathology, no gender differences have been observed in the latter triple AD models. In agreement, NFT formation in Aβ-injected tau transgenic mice (P301L) did not vary with gender [24]. Even though one single publication reported enhanced neurofibrillary pathology in female TAPP mice [25], altogether these results point to the involvement of the Aβ pathway, rather than the tau pathway, in the higher risk of AD in women. Interestingly, further supporting evidence comes from oxidative stress studies. Previous research by our group [26] demonstrated a gender-specific partial up-regulation of antioxidant defence in post-mortem brain regions from female compared to male AD patients, further indicating that oxidative damage is caused by overproduction of reactive oxygen species (ROS) rather than by insufficient detoxification of ROS. Since mitochondria represent the major source of ROS, the findings from Lloret and co-workers are of specific interest, showing that brain mitochondria from old female rats produce higher levels of ROS after exposure to Aβ than age-matched brain mitochondria from male rats [27]. A number of studies attested to neuroprotective effects of neurosteroids against AD-related cellular and mitochondrial injury, but the underlying mechanisms are still poorly understood. Findings of our group corroborated that the AD key proteins and oxidative stress are themselves able to modify the neogenesis of neurosteroids in a cellular AD model [28,29] (Fig. 1). In fact, treatment of human SH-SY5Y neuroblastoma cells with H2O2 for 24 or 48 h led to a decrease of oestradiol synthesis.
This was paralleled by increased cell death compared to untreated controls and a down-regulation of the expression of aromatase, the enzyme responsible for oestradiol formation from testosterone. Interestingly, cell death was also observed after inhibition of aromatase by treatment with letrozole, suggesting that endogenous oestradiol formation plays a critical role in cell survival. Furthermore, if cells were pre-treated with oestradiol, it was possible to protect them against H2O2- and letrozole-induced cell death. In agreement, a similar protective effect of oestradiol was observed in stress-condition experiments treating the same cell line with heavy metals, such as cobalt and mercury [30]. In addition, modulation of neurosteroid production was observed in SH-SY5Y cells overexpressing the human amyloid precursor protein (APP) or human tau protein [28]. Indeed, overexpression of human wild-type tau (hTau40) protein induced an increase in the production of PROG, 3α-androstanediol and 17-hydroxyprogesterone, in contrast to overexpression of the abnormally hyperphosphorylated tau bearing the P301L mutation, which led to a decrease in the production of these neurosteroids. In parallel, a decrease of PROG and 17-hydroxyprogesterone production was observed in cells expressing human wild-type APP (wtAPP), whereas 3α-androstanediol and oestradiol levels were increased. These results provided first evidence that the AD key proteins are able to modulate, directly or indirectly, the biological activity of the enzymatic machinery producing neurosteroids. These findings were further confirmed by in vitro experiments using native SH-SY5Y cells treated with aggregated Aβ1-42 peptide for 24 h [31]. Since APPwt SH-SY5Y cells secrete Aβ at levels within the nanomolar concentration range, treatment of native SH-SY5Y cells with a "non-toxic" concentration range (100-1,000 nM, non-cell-death-inducing Aβ1-42 concentrations) revealed an increase in oestradiol production, whereas toxic Aβ1-42 concentrations within the micromolar range, leading to cell death, strongly reduced oestradiol levels.

Fig. 1 Main biochemical pathways for neurosteroidogenesis in the vertebrate brain. Boxes represent neurosteroids which are sensitive to modulation by the AD key proteins, Aβ and/or tau. Mitochondrial 17β-HSD (marked by *) is equivalent to ABAD in mitochondria. PREG pregnenolone, PROG progesterone, 17OH-PREG 17-hydroxypregnenolone, 17OH-PROG 17-hydroxyprogesterone, DHEA dehydroepiandrosterone, DHP dihydroprogesterone, ALLOPREG allopregnanolone, DHT dihydrotestosterone, P450scc cytochrome P450 cholesterol side chain cleavage, P450c17 cytochrome P450c17, 3β-HSD 3β-hydroxysteroid dehydrogenase, 5α-R 5α-reductase, Arom. aromatase, 21-OHase 21-hydroxylase, 3α-HSOR 3α-hydroxysteroid oxidoreductase, 17β-HSD 17β-hydroxysteroid dehydrogenase

Modulation of steroid production was also shown in other cell lines, for example in oligodendrocytes, where DHEA production is up-regulated under oxidative stress conditions induced by treatment with Aβ peptide or Fe2+ [32]. Interestingly, similar results were found in Alzheimer patients, in whom DHEA was significantly elevated in brain and cerebrospinal fluid when compared to control subjects [33]. Finally, several reports propose allopregnanolone (3α,5α-THP) as a plasma biomarker for AD, since the level of this neurosteroid was shown to be decreased by 25% in the plasma of demented patients compared with control subjects [34,35].
The fact that the ability to produce neurosteroids has been conserved throughout vertebrate evolution suggests that this category of molecules is important for living beings. Thus, we could speculate that the modulation of their biosynthesis plays an important role in the pathophysiology of neurodegenerative disorders, such as AD.

Evidence of Neuroprotective Action of Steroids in Cellular and Animal Studies

Neuroprotective effects of neurosteroids against a variety of brain injuries have been described for many years. Numerous studies focusing on oestrogens showed that these molecules are able to enhance cerebral blood flow, prevent atrophy of cholinergic neurons, and modulate the effects of trophic factors in the brain [36]. Oestrogens are a group of compounds known for their importance in the estrous cycle, including oestrone (E1), oestradiol (E2), and oestriol (E3). Oestradiol is about ten times as potent as oestrone and about 80 times as potent as oestriol in its oestrogenic effect. Oestradiol is also present in males, being produced as an active metabolic product of testosterone. The serum levels of oestradiol in males (14-55 pg/mL) are roughly comparable to those of post-menopausal women (<35 pg/mL). Oestradiol in vivo is interconvertible with oestrone, the oestradiol-to-oestrone conversion being favoured; however, evidence of this metabolism is mainly derived from the periphery. Animal studies, especially in rodents and transgenic mouse models of AD, seem to confirm positive effects of oestrogen treatment. It has been shown that oestrogen treatment in mice expressing mutations in human APP (Swedish and Indiana mutations) had an impact on APP processing, decreasing Aβ levels and thus its aggregation into plaques [37]. The mechanisms underlying this action of oestrogen are still poorly understood, but as discussed by Pike et al. [11], it seems that oestrogen, amongst other effects, is able to promote the α-secretase pathway (non-amyloidogenic, i.e. non-Aβ-producing) via activation of extracellular signal-regulated kinases 1 and 2 (ERK 1 and 2) and through the protein kinase C (PKC) signalling pathway. In triple transgenic AD mice, depletion of sex steroid hormones induced by ovariectomy in adult females significantly increased Aβ accumulation and had a negative impact on cognitive performance [18,38]. Conversely, treatment of these ovariectomized mice with oestrogens was able to prevent these effects. Of note, when PROG was administered in combination with oestrogens, the beneficial effects on Aβ accumulation, but not those on cognitive performance, were blocked. However, oestrogen and PROG can both modulate the kinase and phosphatase activities involved in tau phosphorylation, especially glycogen synthase kinase-3β (GSK-3β). Thus, oestrogen can induce the phosphorylation of GSK-3β, which inactivates the enzyme and reduces tau phosphorylation, whereas PROG can decrease the expression of tau and GSK-3β [11,39]. This suggests that oestrogen and PROG not only can interact to regulate APP processing and tau phosphorylation but can also act independently on different AD pathways. Cognitive effects of PROG were confirmed in mice bearing the Swedish double mutation of APP and mutant presenilin 1 (APPswe+PSEN1Δ9 mutant mice), which showed decreased hippocampally mediated cognitive performance compared to non-transgenic littermates [38]. In this AD mouse model, PROG was able to improve cognitive performance in tasks involving the cortex but not in those involving the hippocampus.
In addition, APPswe+PSEN1Δ9 mice presented decreased 3α,5α-THP levels (a metabolite of PROG) in the hippocampus compared to wild-type mice, suggesting that deficits in hippocampal function may be due, at least in part, to a reduced capacity to form 3α,5α-THP in the hippocampus. Furthermore, a more recent study supported the role of 3α,5α-THP in the triple transgenic mouse model of AD (3xTgAD) by showing reduced Aβ generation in the hippocampus, cortex and amygdala, coupled with increased cellular regeneration, after treatment with 3α,5α-THP [40]. At the cellular level, oestrogen binds to nuclear receptors, such as oestrogen receptors α and β (ERα/β), and acts as a transcription factor. It enhances the expression of anti-apoptotic proteins, such as Bcl-2 and Bcl-xL, and down-regulates the expression of Bim, a pro-apoptotic factor, preventing the initiation of the mitochondrial cell death programme [11,41]. Another way that oestrogen can protect cells from apoptosis is the activation of antioxidant defence systems by up-regulating the expression of manganese superoxide dismutase (MnSOD) and glutathione peroxidase [42]. Thus, oestrogen can have direct antioxidant effects by increasing reduced glutathione levels and decreasing oxidative DNA damage in mitochondria, as observed in a study using ovariectomized female rats [43]. Of note, oestrogen can also modulate the redox state of cells by intervening in several signalling pathways, such as mitogen-activated protein kinase (MAPK), G protein-regulated signalling, NFκB, c-fos, CREB, phosphatidylinositol-3-kinase, PKC and Ca2+ influx [41,44]. On the basis of this complex mode of action, oestrogens not only seem to be able to decrease oxidative stress markers, including lipid peroxidation, protein oxidation and DNA damage, but can also directly act on the regulation of mitochondrial function [42].

Neurosteroids and Mitochondria: Focus on Potential Protective Effects of Oestrogen Against Aβ-Induced Toxicity

Mitochondria are the "powerhouses of the cell", providing the main part of cellular energy via ATP generation, which is accomplished through oxidative phosphorylation from nutritional sources [45]. They control cell survival and death by regulating both energy metabolism and apoptotic pathways and contribute to many cellular functions, including intracellular calcium homeostasis, alteration of the cellular reduction-oxidation potential, cell cycle regulation and synaptic plasticity [46]. Mitochondrial dysfunction has been proposed as an underlying mechanism in the early stages of AD [47,48]. We recently summarized evidence from ageing and Alzheimer models showing that the harmful trio of "ageing, Aβ and tau protein" triggers mitochondrial dysfunction through a number of pathways, such as impairment of oxidative phosphorylation, elevation of reactive oxygen species production and interaction with mitochondrial proteins, contributing to the development and progression of the disease [13,49]. Mitochondria and neurosteroidogenesis are also closely linked, since mitochondria contain the first enzyme involved in steroidogenesis, the cytochrome P450 cholesterol side chain cleavage enzyme (P450scc), located at the inner side of the mitochondrial membrane, which is responsible for the conversion of cholesterol to pregnenolone (PREG). The first step of neurosteroidogenesis is the transfer of cholesterol from the outer to the inner mitochondrial membrane.
It is also the rate-limiting step in the production of neurosteroids, because the ability of cholesterol to enter mitochondria and become available to P450scc determines the efficiency of steroidogenesis [50]. Free cholesterol accumulates outside of mitochondria and binds to the steroidogenic acute regulatory protein, a hormone-induced mitochondria-targeted protein that initiates cholesterol transfer into mitochondria. The molecules are then transported inside mitochondria by a protein complex including the translocator protein (TSPO), a cholesterol-binding mitochondrial protein also known as the peripheral-type benzodiazepine receptor, which permits cholesterol transfer into mitochondria and subsequent steroid formation. It has been shown that TSPO is up-regulated in the post-mortem brain of AD patients, resulting in an increased level of PREG in the hippocampal region of those brains [50]. Interestingly, 22R-hydroxycholesterol, a steroid intermediate in the conversion of cholesterol to PREG, was found at lower levels in the AD brain compared to controls, which suggests that TSPO does not function normally in Alzheimer patients [33,51]. From an energetic point of view, it is known that steroids such as oestrogen can regulate mitochondrial metabolism by increasing the expression of glucose transporter subunits and by regulating some enzymes involved in the tricarboxylic acid (TCA) cycle as well as glycolysis, such as hexokinase, phosphofructokinase, and pyruvate and malate dehydrogenase [41,52], which leads to improved glucose utilization by cells [11,44] (Fig. 2). Oestrogens also seem to be able to up-regulate genes coding for some electron transport chain components present in nuclear and in mitochondrial DNA [53,54]. In fact, an oestrogen-induced increase in the expression of some subunits of mitochondrial complex I (CI), cytochrome c oxidase (complex IV or CIV) and the F1 subunit of ATP synthase has been observed [41,42,52]. Furthermore, treatment of ovariectomized female rats with oestradiol induced an increase of mitochondrial respiratory function in the brain, with an enhancement of O2 consumption coupled to an increased activity of cytochrome c oxidase [53]. Thus, oestrogen seems to enhance general cellular metabolism, but it also seems able to directly protect mitochondria against oxidative stress-induced injury [52]. For example, incubation of isolated rat brain mitochondria with oestradiol leads to a decrease of H2O2 production by this organelle, coupled with an increase of the mitochondrial membrane potential (MMP). Furthermore, it has been proposed that its phenolic A ring could allow oestradiol to intercalate into the mitochondrial membrane and prevent the lipid peroxidation occurring under stress conditions [54], which could be responsible for the stabilization of the MMP. Moreover, oestradiol seems to prevent the release of cytochrome c by mitochondria (a process known to induce apoptosis by activating the caspase cascade in the cytoplasm), thereby increasing the efficiency of the respiratory chain [52]. Finally, another oestrogen signalling pathway counteracting the negative effects of oxidative stress is the one regulating calcium homeostasis by inducing mitochondrial sequestration of cytosolic calcium [42,54]. In fact, an imbalance of calcium regulation can lead to an increase of ROS production by activating the enzyme nitric oxide synthase, which can in turn sensitize neural cells to oxidative damage.
It has been shown that oestradiol treatment of primary hippocampal neurons was able to potentiate the glutamatergic response via the NMDA receptor, which resulted in an increased influx of calcium into cells. This effect was coupled to an induction of mitochondrial sequestration of cytosolic calcium and an increase in mitochondrial calcium load tolerability, thereby avoiding calcium-induced excitotoxicity as well as promoting cell survival. Taken together, all these findings indicate that oestrogen might be able to compensate for the deficits and injuries that occur in AD, namely mitochondrial respiration impairments, enhanced ROS production, excitotoxicity and, more generally, metabolic deficits (Fig. 2). More recently, new light has been shed on a mitochondrial enzyme that is able to directly bind the Aβ peptide and one of whose main substrates is 17β-oestradiol [55]. This enzyme is known as 17β-hydroxysteroid dehydrogenase type 10 (17β-HSD) or Aβ-binding alcohol dehydrogenase (ABAD).

ABAD, Oestradiol and Aβ-Induced Mitochondrial Impairment

ABAD belongs to the alcohol dehydrogenase family, and it is responsible for the reversible oxidation/reduction of several substrates, including linear alcohols and steroids such as 17β-oestradiol, using NAD+ as a cofactor [56]. Under normal conditions (without Aβ), this enzyme plays a role in the regulation of metabolic homeostasis, and its overexpression improved cell viability and ATP content [57]. It has been shown that ABAD is up-regulated in the brains of AD mice as well as AD patients [57,58], and it has been suggested that the binding of Aβ changes the conformation of the enzyme, which seems to exacerbate the mitochondrial dysfunction induced by Aβ.

Fig. 2 Modulation of mitochondrial function by Aβ, hyperphosphorylated tau and oestradiol. In AD, mitochondrial dysfunction was found to be a central pathological mechanism which already occurs at early stages of the disease. On the one hand, studies showed that the amyloid-β peptide (Aβ) can be responsible for metabolic impairments, such as the decrease of glucose consumption observed in the AD brain as well as calcium-induced excitotoxicity in neurons. It has been found that hyperphosphorylated tau and Aβ are able to impair mitochondrial respiration by inhibiting the ETC complexes CI and CIV, respectively, inducing decreased oxygen consumption, decreased ATP production and increased ROS levels. This oxidative stress induced by ETC dysfunction can overwhelm the cellular and mitochondrial scavengers (MnSOD, Cu/ZnSOD) and impacts the MMP as well as mitochondrial DNA (mtDNA). On the other hand, it has been shown that oestradiol can increase glucose utilization by cells as well as ETC activity, stabilize the MMP and prevent ROS production and calcium-induced excitotoxicity. In the graph, E2 designates where oestradiol potentially acts on mitochondria to compensate for Aβ-induced toxicity. In turn, Aβ seems to be able to impact oestradiol metabolism in mitochondria, since it can bind directly to the mitochondrial enzyme ABAD and possibly modulates its enzymatic activity (such as the reversible conversion of oestradiol to oestrone) and non-enzymatic activity (mitochondrial RNase P).
ABAD Aβ-binding alcohol dehydrogenase, CI complex I, CII complex II, CIII complex III, CIV complex IV, CV complex V, cyt c cytochrome c, Cu/Zn SOD copper/zinc superoxide dismutase, MnSOD manganese superoxide dismutase, TCA tricarboxylic acid, E2 oestradiol, ROS reactive oxygen species, mtDNA mitochondrial DNA, ER oestrogen receptor

More recently, studies performed in transgenic mouse models of AD showed that behavioural stress or depletion of ovarian hormones by ovariectomy exacerbated mitochondrial dysfunction, aggravated plaque pathology and increased ABAD expression in the brain [59,60]. Furthermore, double transgenic mice overexpressing mutant APP and ABAD present an earlier onset of cognitive impairment and histopathological changes when compared to APP mice [49], suggesting that the Aβ-ABAD interaction is an important mechanism underlying Aβ toxicity. This hypothesis is supported by a study from Yao and collaborators, who recently showed that inhibition of the Aβ-ABAD interaction by a decoy peptide can rescue mitochondrial deficits (activity of mitochondrial respiratory complexes, ROS levels) and improve neuronal and cognitive function [60]. Interesting new findings of our group point in the same direction with regard to the use of a novel small ABAD-specific compound inhibitor (AG18051), investigating the role of this enzyme in Aβ toxicity in human SH-SY5Y cells treated for 5 days with 0.5 μM Aβ1-42 [61]. The crystal structure of human ABAD in the presence of AG18051 showed that the inhibitor formed a covalent link with the NAD+ cofactor and occupied the substrate-binding site of the enzyme [62]. Thus, the inhibitor was able to prevent Aβ-induced cell death and significantly normalized the metabolic functions impaired by Aβ, such as cytosolic and mitochondrial ROS as well as mitochondrial respiration. Furthermore, it was able to restore oestradiol levels, which were reduced after treatment with Aβ [31,61]. Interestingly, the apparent protective effects of the ABAD inhibitor seem to be independent of its interaction with Aβ. In fact, a 24-h pre-treatment with AG18051, before the incubation of cells with Aβ1-42, was sufficient to prevent cell death, normalize ROS production and restore mitochondrial respiration. Regarding oestradiol levels, we previously showed that they decreased in the cytosol and increased in isolated mitochondria of SH-SY5Y cells after 5 days of treatment with Aβ [49]. The ABAD inhibitor normalized the oestradiol level in the cytosol [61], and preliminary data of our group suggest a similar effect in isolated mitochondria (unpublished data). Thus, we propose the following mode of action: the ABAD inhibitor is able to block Aβ toxicity by changing the ABAD configuration, which disables the binding of Aβ, thus preventing its toxic effects (Fig. 3). The action of ABAD on the electron transport chain (ETC) is still unclear, but the potential role of ABAD as a mitochondrial RNase P directly links ABAD to the production of mitochondrial ETC proteins and ROS generation [63]. Notably, AG18051 was also able to normalize this function of ABAD, since mitochondrial respiration was restored, but the underlying mechanisms still remain unclear [61]. Thus, the interplay between ABAD, oestradiol and mitochondria may be a very interesting lead to follow in the future to decode Aβ-induced mitochondrial toxicity and explore therapeutic strategies of ABAD inhibition.
Conclusion

It is still debated whether oestrogen treatment after menopause could result in improved cognitive function in women. This debate is based on extensive animal and cell culture data showing that oestrogens can positively affect the ageing and AD brain. It was recognized from former studies that oestrogen depletion in post-menopausal women represents a significant risk factor for the development of AD and that an oestrogen replacement therapy may decrease this risk and even delay disease progression [64,65]. However, large treatment trials showed negative effects of long-term treatment with oestrogens in older women.

Fig. 3 Aβ, ABAD and mitochondria: modes of interaction. a Under normal conditions, ABAD is responsible for the reversible oxidation/reduction of linear alcohols and steroids, such as the reversible conversion of oestradiol to oestrone. Its potential function as an RNase P could also be important for the proper functioning of the mitochondrial ETC. b Under AD-relevant pathological conditions, Aβ can directly bind the mitochondrial enzyme ABAD, changing the configuration of the enzyme, which seems to inhibit its activity and creates an imbalance between oestradiol and oestrone. Aβ-induced ABAD misfolding can impact ETC functioning and increase, directly or indirectly, ROS production, which leads to cell death. c In the presence of AG18051 (AG), the binding of Aβ to ABAD is inhibited, normalizing oestradiol levels, ROS production and ETC activity, and improving cell survival. ABAD Aβ-binding alcohol dehydrogenase, IMM inner mitochondrial membrane, OMM outer mitochondrial membrane

Above all, results from the WHIMS, including 4,532 post-menopausal women aged over 68 years, indicated a twofold increase in dementia after 4.2 years of hormonal treatment (p.o. treatment with Premarin plus medroxyprogesterone). In addition, the study indicated potential risks for breast cancer, pulmonary embolism and stroke [66,67]. Some attribute this failure to the synthetic nature of the hormones used in the WHIMS trial, since in vitro studies support a beneficial role of oestradiol and progesterone, but not of the medroxyprogesterone used in the WHIMS [68,69]. Of note, medroxyprogesterone is not metabolized to 3α,5α-THP and can inhibit the conversion of PROG to 3α,5α-THP [70]. Similarly, oestradiol, PROG or 3α,5α-THP, but not medroxyprogesterone, showed beneficial effects in ageing, seizure, cortical contusion, ischaemia and diabetic neuropathy models [38]. Another theory that tries to explain the trial failure is the "critical window hypothesis", which asks during which critical period oestrogen might exert a neuroprotective effect [71]. This hypothesis is substantiated by animal research: mice that had undergone ovariectomy, but in which oestrogen treatment was substantially delayed by months (the equivalent of years in human terms), did not benefit from treatment, in contrast to the animals that received treatment immediately after ovariectomy [72]. However, a recent meta-analysis [73] indicated, contrary to expectations, that the age of women and the time elapsed since menopause when treatment was initiated did not significantly affect treatment outcome. Thus, natural oestradiol (E2) without a progestagen should represent the preferred treatment [73]. Furthermore, the oral route of drug delivery, being non-invasive in nature, is by far the most convenient and preferred route of administration in any acute or chronic treatment.
Although oestradiol from conventional oral oestradiol formulations has the ability to cross the blood-brain barrier (BBB) and reach the brain, a large oral dose is required to achieve therapeutic levels of oestradiol due to its non-specificity for the brain. This non-specificity increases the peripheral drug burden and subsequently potentiates the risk of peripheral adverse effects. Furthermore, with specific regard to the brain-specific action of oestradiol as a neurosteroid, independently of its action in the periphery, other modes of administration (cyclical, nasal, polymer nanoparticles for oral delivery) need to be sought and investigated [74]. Alternatively, the true potential of phyto-oestrogens, like the soy isoflavones genistein, daidzein and glycitein, to beneficially modify disease processes should be studied in clinical trials [27]; these compounds activate the same neuroprotective pathways as oestrogens but with weak oestrogenic cellular effects, which might be responsible for the lower prevalence of AD in Japanese living in their ethnic homeland compared to Japanese living in the USA [75]. In addition, the field could strongly benefit from the successful development of oestrogen derivatives that have no unfavourable oestrogenic side effects. The successful use of oestrogen or oestrogen-analogue therapies to delay, prevent and/or treat AD will require additional research to optimize key parameters of therapy. In this context, the interplay between ABAD, oestradiol and mitochondria, and accordingly ABAD inhibition, might represent a further interesting lead to follow in the future. Knowledge acquired from these studies will eventually be applied to unravel the pathophysiology and to inform prevention and intervention strategies of AD.
2023-01-07T14:24:43.739Z
2012-06-08T00:00:00.000
{ "year": 2012, "sha1": "d21ce6256c65641311b9dd4b55b833912d82ab87", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12035-012-8281-x.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "d21ce6256c65641311b9dd4b55b833912d82ab87", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
245447300
pes2o/s2orc
v3-fos-license
Safety and Efficacy of Extended Interval Dosing for Immune Checkpoint Inhibitors in Non–Small Cell Lung Cancer During the COVID-19 Pandemic Introduction Extended interval (EI) dosing for immune checkpoint inhibitor (ICI) mono- or consolidation therapy initiated due to the COVID-19 pandemic led to a significant reduction in ICI-related site visits for patients with stage III and IV non–small cell lung cancer. Here we report the safety and efficacy compared to standard dose (SD) schedules. Method In this retrospective analysis, patients who received ICI mono- or consolidation therapy, or adjuvant ICI therapy, were assessed. Safety and efficacy of EI dosing were compared with data from SD schedules. Results One hundred seventeen patients received EI dosing for ICI and 88 patients SD. Patient characteristics were comparable. We observed 237 adverse events in the EI dosing cohort versus 118 in the SD group (P = .02). Overall, there was no difference in the occurrence of grade ≥3 adverse events (EI dosing: 21/237 [8.9%]; SD group: 20/118 [17.0%], P = .42), except for the pembrolizumab EI dosing cohort. Of all patients who received an EI dosing schedule, however, only 8 (6.8%) were reduced to SD because of toxicity. In 5 (4.3%) patients, ICI was permanently stopped because of severe toxicity, compared to 11 (12.5%) discontinuations in the SD group. Short-term treatment interruption occurred with similar frequencies in both groups. Progression-free survival and overall survival were comparable in patients receiving pembrolizumab and in those receiving adjuvant durvalumab. Progression-free survival and OS were better in the nivolumab EI dosing cohort. Conclusion EI dosing for ICI did not lead to an increase of clinically relevant toxicities resulting in dose reduction and/or treatment discontinuation. Efficacy of EI dosing of pembrolizumab and durvalumab was comparable to SD. Based on our safety and efficacy data, EI dosing for ICI seems a safe and effective strategy.

Introduction
The COVID-19 pandemic forced oncologists to cut down face-to-face patient contacts, thereby reducing the risk of exposure to the virus and reallocating resources to provide the necessary care for COVID-19 patients. As an alternative to keep up our oncology services for stage III and IV non-small cell lung cancer (NSCLC), extended interval (EI) dosing for immune checkpoint inhibitors (ICI) was used for mono- and consolidation therapy. However, the question arises whether EI dosing will have an impact on the safety and efficacy of ICI. Awaiting data from randomized controlled trials (ClinicalTrials.gov NCT04295863), we performed a retrospective cohort study to assess the safety and efficacy of EI dosing for ICI and compared the results to standard dose (SD) schedules in a real-world NSCLC population.

Patients and Treatment
Between January 1, 2019 and June 1, 2021, all consecutive patients with stage III/IV NSCLC treated at the University Medical Center Groningen with mono-ICI, ICI + chemotherapy or adjuvant ICI were enrolled. Standard dosing (SD) was compared with EI schedules. EI dosing was introduced by March 1, 2020. SD was defined as pembrolizumab mono- or consolidation therapy (the latter ± pemetrexed, after 4 cycles of ICI + chemotherapy) every 3 weeks at a dose of 200 mg, nivolumab every 2 weeks at a dose of 240 mg and durvalumab every 2 weeks at a dose of 10 mg/kg.
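It may help to note that the extended-interval schedules introduced in the next paragraph (pembrolizumab 400 mg every 6 weeks, nivolumab 480 mg every 4 weeks, durvalumab 1500 mg every 4 weeks) preserve the average dose intensity of the SD schedules above. The short sketch below makes that arithmetic explicit; it is illustrative only, not part of the study protocol, and the 75 kg body weight used to convert the weight-based durvalumab SD dose is an assumption.

```python
# Illustrative check that extended-interval (EI) schedules preserve the
# average dose intensity (mg/week) of the standard-dose (SD) schedules.
# The 75 kg body weight for the weight-based durvalumab SD dose is an
# assumption made for this illustration only.

ASSUMED_WEIGHT_KG = 75

schedules = {
    # drug: ((SD dose in mg, SD interval in weeks), (EI dose, EI interval))
    "pembrolizumab": ((200, 3), (400, 6)),
    "nivolumab": ((240, 2), (480, 4)),
    "durvalumab": ((10 * ASSUMED_WEIGHT_KG, 2), (1500, 4)),
}

for drug, ((sd_dose, sd_wk), (ei_dose, ei_wk)) in schedules.items():
    sd_intensity = sd_dose / sd_wk  # average mg per week under SD
    ei_intensity = ei_dose / ei_wk  # average mg per week under EI
    print(f"{drug}: SD {sd_intensity:.1f} mg/wk vs EI {ei_intensity:.1f} mg/wk")
```

For the flat-dosed drugs the weekly dose intensity is identical by construction; for durvalumab the equivalence is exact only at 75 kg, which is consistent with the move from a weight-based SD dose to a flat EI dose.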
After at least 2 cycles of ICI at standard dose (SD) without clinically relevant toxicity, the dose was escalated (EI dosing) to pembrolizumab 400 mg every 6 weeks [1], nivolumab 480 mg every 4 weeks [2] and durvalumab 1500 mg every 4 weeks [3]. All patients already receiving ICI monotherapy on March 1, 2020 without clinically relevant toxicity were escalated to the EI dosing schedule. Otherwise, patients received the EI schedule after 2 cycles of standard treatment without clinical toxicity. Those receiving pembrolizumab-pemetrexed consolidation therapy were either continued on the combination or, after discontinuation of pemetrexed, escalated to the pembrolizumab EI dosing schedule.

Assessment of Safety and Efficacy
Safety and efficacy were compared between both groups. Adverse events (AE) were assessed by CTCAE 5.0. We report numbers of AEs at any moment during treatment [Total events] and AEs occurring in the escalation window [Escalation window]. In the EI dose cohort, the start of the escalation window is the actual moment of schedule adaptation. In the SD cohort, the start of the escalation window was defined as the moment at which patients would have been escalated from SD to the extended dose interval. Progression-free survival (PFS) was defined as the time from treatment start until first evidence of tumor progression or until death from any cause, whichever came first. Overall survival (OS) was defined as the time from treatment start to death from any cause.

Statistics
The primary outcome of this analysis is safety. The difference in total AE frequency per dosing group (EI dosing vs. SD), overall and in the treatment groups (adjuvant durvalumab, nivolumab, pembrolizumab, pembrolizumab + chemotherapy), was assessed using the Mann-Whitney U test. Clinical outcome (explorative analysis) was evaluated at the patient level by means of PFS and OS. The relationship between ICI dosing cohort and survival was explored by Kaplan-Meier survival plots, and differences were assessed using the log-rank test. Due to the planned schedule (escalation to EI dosing only after receiving two cycles of SD without early clinically relevant toxicity or early progression), a selection bias in terms of survival was introduced. To correct for this bias in the survival analysis, patients with early progression in the SD cohort were excluded from this analysis.

Results
Two hundred five patients were included (Figure 1). Patient characteristics were similar between patients receiving SD (n = 88) and EI dosing (n = 117), except that in the SD cohort more patients were treated with nivolumab and fewer patients were treated with durvalumab (Table 1). Of those receiving SD, 67 patients were fully treated before March 1, 2020. The remaining 21 patients were not escalated due to progression of disease before the escalation window (n = 8), early AEs (n = 10), or logistic reasons (n = 3). We observed a total of 237 AEs in the EI dosing cohort versus 118 in the SD group (P = .02; Table 1). Of these events, 21/237 (8.9%) and 20/118 (17.0%), respectively, were CTCAE grade 3 or higher (P = .42). Of all events, 46.4% in the EI dosing cohort and 58.5% in the SD cohort occurred in the escalation window. Only in the pembrolizumab EI dosing cohort were more AEs observed compared to SD (P = .02), which, however, did not result in an increased number of grade ≥3 events (Table 2) or events leading to treatment interruption or discontinuation (Table 3).
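As a sketch of the statistical comparisons described in the Statistics section above, the snippet below runs a Mann-Whitney U test on per-patient AE counts and a Kaplan-Meier/log-rank comparison of PFS. The data are randomly generated toy values, not the study data, and the third-party scipy and lifelines packages are assumed to be available.

```python
# Minimal sketch of the analyses described above, on made-up toy data:
# Mann-Whitney U for per-patient AE counts, Kaplan-Meier + log-rank for PFS.
import numpy as np
from scipy.stats import mannwhitneyu
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Hypothetical per-patient AE counts (cohort sizes match the study, values do not).
ae_ei = rng.poisson(2.0, size=117)  # EI dosing cohort
ae_sd = rng.poisson(1.3, size=88)   # SD cohort
u_stat, p_ae = mannwhitneyu(ae_ei, ae_sd, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, P = {p_ae:.3f}")

# Hypothetical PFS in months; event = 1 means progression or death observed.
pfs_ei, event_ei = rng.exponential(14, 117), rng.integers(0, 2, 117)
pfs_sd, event_sd = rng.exponential(12, 88), rng.integers(0, 2, 88)

# Kaplan-Meier estimate for one cohort, then a log-rank comparison of both.
kmf = KaplanMeierFitter()
kmf.fit(pfs_ei, event_observed=event_ei, label="EI dosing")
print(f"EI median PFS (toy data): {kmf.median_survival_time_:.1f} months")
result = logrank_test(pfs_ei, pfs_sd,
                      event_observed_A=event_ei, event_observed_B=event_sd)
print(f"log-rank P = {result.p_value:.3f}")
```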
Of the 117 patients receiving the EI dosing schedule, only 8 (6.8%) patients were reduced to SD because of toxicity, and in 5 patients (4.3%) ICI was permanently stopped because of toxicity, compared to 11 (12.5%) in the SD group. Short-term treatment interruption occurred with similar frequencies in both groups (15.4% vs. 13.6%). PFS and OS were comparable between both dosing groups (Figure 2).

Permanent treatment discontinuation because of toxicity, SD vs. EI dosing:*
• By treatment schedule: chemotherapy + ICI, 1 vs. 0; pembrolizumab monotherapy, 5 (a) vs. 0; nivolumab monotherapy, 0 vs. 2; adjuvant durvalumab, 5 vs. 3.
• By PD-L1 expression: PD-L1 ≥ 50%, 5 vs. 1; PD-L1 < 50%, 3 vs. 3; PD-L1 not assessed, 3 vs. 1.
* Percentages refer to all patients in the standard dose (nSD = 88) and the EI dosing (nEI = 117) cohorts. Each cohort includes patients receiving pembrolizumab monotherapy (nSD = 30 and nEI = 35), pembrolizumab consolidation therapy (nSD = 11 and nEI = 15), nivolumab monotherapy (nSD = 30 and nEI = 18), and adjuvant durvalumab (nSD = 17 and nEI = 49). a One patient had two occurrences of the same toxicity on pembrolizumab monotherapy (hepatitis). b One patient had two occurrences of the same toxicity on nivolumab monotherapy (colitis). c Therapy was interrupted in one patient receiving the chemotherapy-ICI combination (hepatitis and endocrinopathy). d Therapy was interrupted in two patients treated with pembrolizumab monotherapy; patient 1: pneumonitis and endocrinopathy; patient 2: two occurrences of skin toxicity. e Patient with two different toxicities: fatigue leading to dose reduction and skin toxicity leading to treatment interruption.

Discussion
In this retrospective single-center cohort study, we compared safety and efficacy of EI dosing for ICI mono- or consolidation treatment during the COVID-19 pandemic with data from SD schedules in patients with stage III and IV NSCLC. Low-grade AEs were observed more frequently only in the pembrolizumab EI cohort compared to the SD cohort. After dose escalation, however, we did not observe an increase in clinically relevant toxicity leading to treatment interruption and/or discontinuation compared to the SD cohort. Efficacy of pembrolizumab and durvalumab was comparable between both groups, whereas better survival with EI dosing for nivolumab was suggested by our data. The apparent increased survival of this cohort, however, can be explained by a shift of patients to other centers in the Netherlands throughout the years 2019-2021. As a consequence, mainly long-term responders were escalated during the COVID-19 pandemic in our center, skewing PFS and OS in favor of the EI dosing cohort. The selection bias and the reporting bias of especially low-grade AEs are the biggest limitations of this retrospective analysis. In addition, we could not include enough patients to properly power the explorative PFS and OS analysis. Until now, only limited data about EI dosing for ICI in NSCLC are available. One small observational study in 32 NSCLC patients receiving either SD or EI dosing of durvalumab reported comparable rates of ICI-related AEs and survival in both dose cohorts [4]. Data from a randomized controlled trial assessing nivolumab or pembrolizumab EI dosing in locally advanced or metastatic cancers are expected in 2025 (ClinicalTrials.gov NCT04295863). EI dosing for nivolumab in metastatic melanoma, by contrast, is common practice [5].
To our knowledge, this is the first comprehensive analysis of the safety and efficacy of EI dosing for pembrolizumab mono- or consolidation therapy, nivolumab monotherapy and adjuvant durvalumab in stage III and stage IV NSCLC patients. Based on our data, these schedules seem a safe and effective strategy to decrease the number of hospital visits during and beyond the COVID-19 pandemic.

Clinical practice points
• Until now, only limited data about extended interval (EI) dosing for ICI in NSCLC are available. One small observational study in 32 NSCLC patients receiving either standard dose or EI dosing of durvalumab reports comparable rates of ICI-related AEs and survival in both dose cohorts.
• Our study in 117 patients shows that extended interval dosing did not lead to an increase in clinically relevant toxicity, in the number of patients needing dose reduction, or in the frequency of treatment discontinuations.
• Efficacy of pembrolizumab monotherapy and adjuvant durvalumab was comparable in both groups. Based on our study, these schedules seem a safe and effective strategy.

Disclosure
The authors have stated that they have no conflicts of interest.
2021-12-25T14:06:56.622Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "3c64b2cb99118145a683daa269ad1080c3b6837d", "oa_license": "CCBY", "oa_url": "https://pure.rug.nl/ws/files/205664922/1_s2.0_S1525730421003053_main.pdf", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4a58243364aa838c1f761f3a55cc74c3624b2d3b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253125795
pes2o/s2orc
v3-fos-license
Research trends and hotspots in the relationship between outdoor activities and myopia: A bibliometric analysis based on the Web of Science database from 2006 to 2021 Objectives This study aimed to explore the current status, hotspots, and emerging research trends regarding the relationship between outdoor activities and myopia. Methods Publications on the relationship between outdoor activities and myopia from 2006 to 2021 were collected from the Web of Science Core Collection database. CiteSpace (version 6.1.R2) was used to perform a bibliometric analysis, and R software (version 4.1.0) was used to visualize the trends and heat map of publications. Results A total of 640 publications were collected and analyzed in the present study. China was the major contributor (n = 204), followed by the United States of America (n = 181) and Australia (n = 137). The United States of America had the most extensive foreign cooperation (centrality = 0.25), followed by Australia (centrality = 0.20). The National University of Singapore contributed the largest number of publications (n = 48), followed by Sun Yat-Sen University (n = 41) and the Australian National University (n = 41). Among institutions, Cardiff University in the United Kingdom had the most extensive foreign cooperation (centrality = 0.12), followed by the National University of Singapore (centrality = 0.11). Saw S from Singapore had the largest number of publications (n = 39), followed by Morgan I from Australia (n = 27) and Jonas J from Germany (n = 23). Investigative Ophthalmology & Visual Science is the most important journal for studying the relationship between outdoor activities and myopia. "Global Prevalence of Myopia and High Myopia and Temporal Trends from 2000 through 2050", published by Holden BA, was the most cited paper in this field, with 177 citations. Co-occurrence and burst analyses of keywords showed that research trends and hotspots in this field focused mainly on "risk," "prevention" and "school". Conclusions The influence of outdoor activities on myopia remains a concern. In the future, deeper cooperation between countries or institutions is required to explore the effects of outdoor activities on myopia. Outdoor activities for the prevention of myopia and reduction of the risk of myopia among school students may be the focus of future research.

Introduction
Myopia, a major public health concern, has become the leading cause of visual impairment worldwide. The global prevalence of myopia is 28.3%, and the prevalence of myopia in Asia is significantly higher than that in other regions (1). Approximately 80-90% of high school students in China suffer from myopia, with high myopia accounting for 10-20% (2). Studies have predicted that by the year 2050, nearly half of the world's population will have myopia, of which nearly one-tenth will be high myopia (3,4). Myopia, especially high myopia, may result in a variety of complications, including cataracts, retinal detachment, glaucoma, macular holes, and even blindness (5)(6)(7). Lack of outdoor activities has been reported as the major risk factor for myopia in children and adolescents (8). Studies have reported that the prevalence of myopia in these age groups can be reduced by extending the duration of their outdoor activities (9,10). According to a meta-analysis, outdoor light exposure can reduce the incidence of myopia, slow its progression, and reduce axial elongation (11).
The mechanisms by which outdoor activities affect the occurrence and development of myopia remain unclear. However, it has been postulated that factors such as light exposure, release of dopamine along with vitamin D, circadian rhythms, and near work could be possible explanations (12)(13)(14)(15). Bibliometrics is an important method for discovering the developmental patterns of a discipline. At present, bibliometrics has been widely used in many fields, such as economics (16), environmental science (17), information management (18), social sciences (19) and biomedicine (20,21). The number of published research articles on myopia has increased worldwide. Some researchers have applied bibliometrics to the field of myopia in recent years, covering, for example, trends in research related to high myopia (22), myopia genetics (23), myopia management (24) and publications on myopia (25). To the best of our knowledge, no bibliometric analysis has discussed the relationship between outdoor activities and myopia. Thus, the present study was undertaken to explore the current status, hotspots, and emerging trends of research regarding the relationship between outdoor activities and myopia, so as to understand the current research status and future development trends in this field. Publications on the relationship between outdoor activities and myopia from 2006 to 2021 were collected from the Web of Science Core Collection database. CiteSpace (version 6.1.R2) was used to perform a bibliometric analysis, and R (version 4.1.0) was used to visualize the trends and heat map of publications. Literature resources We used the Web of Science Core Collection database, which contains more than 12,000 high-impact academic journals, for document retrieval (26,27). The literature search strategies were "(TS = (myopia and physical activity*)) OR (TS = (myopia and outdoor*)) OR (TS = (refractive error* and physical activity*)) OR (TS = (refractive error* and outdoor*))." We included original research articles and reviews published in the English language between January 1, 2006, and December 31, 2021. The literature search was completed on October 9, 2022. We excluded duplicate records, meeting abstracts, news items, editorial materials, letters, books, early-access papers and proceedings papers. Data extraction and analysis Two researchers independently conducted the literature search. Any disagreements were discussed by the researchers until a consensus was reached. CiteSpace is a Java-based application program developed by Professor Chaomei Chen, which visualizes the interrelationships between publications based on co-citation networks (28). In this study, CiteSpace (version 6.1.R2) was used to map the scientific knowledge, and R software (version 4.1.0) was used to visualize the trends and heat map of published research articles. The countries or regions, institutions, journals, and authors of the literature were analyzed to discern the structure of knowledge in this field. Co-citation networks, keywords, and words with strong citation bursts were analyzed to recognize the hotspots, frontiers, and research trends. The network of scientific knowledge consists of nodes and links between nodes. In different networks of scientific knowledge, nodes represent keywords, authors, and so on. The size of a node represents the frequency of publication or citation. The color of a node represents the year, and a warmer color indicates a more recent year.
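The counting behind the country, institution, and author tallies reported below can be illustrated with a short script. This is only a sketch, not the authors' pipeline (they used CiteSpace and R): it assumes a Web of Science tab-delimited export with the conventional 'PY' (publication year) and 'C1' (author address) field tags, and the file name is a placeholder.

```python
from collections import Counter
import csv

def count_publications(records):
    """Tally publications per year and per country from parsed WoS records.

    Each record is assumed to be a dict carrying the WoS field tags
    'PY' (publication year) and 'C1' (author addresses)."""
    per_year, per_country = Counter(), Counter()
    for rec in records:
        year = rec.get("PY", "").strip()
        if year.isdigit():
            per_year[int(year)] += 1
        # Crude country extraction: the last comma-separated token of
        # each address entry is usually the country name.
        for address in rec.get("C1", "").split(";"):
            country = address.strip().rsplit(",", 1)[-1].strip(" .")
            if country:
                per_country[country] += 1
    return per_year, per_country

with open("savedrecs.txt", encoding="utf-8-sig") as fh:  # placeholder export file
    records = list(csv.DictReader(fh, delimiter="\t"))
per_year, per_country = count_publications(records)
print(per_country.most_common(10))
```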
Nodes with centrality greater than 0.1 are often seen as key points. The links between nodes represent co-citation, collaboration, or co-occurrence. The Q value and S value provided by CiteSpace are indexes for evaluating the mapping quality of the network. A Q value greater than 0.3 indicates that the community structure of the network of scientific knowledge is significant, and an S value greater than 0.5 indicates that the clustering of the network of scientific knowledge is reasonable (29). Analysis of countries/regions Sixty-two countries/regions published papers related to this field (Figure 2). The network diagram of papers published in these countries/regions was constructed using CiteSpace, and it comprised 62 nodes and 395 links (Figure 3). The Q value and S value of the network were 0.6177 and 0.8583, respectively. China had the largest number of publications (n = 204), accounting for 31.9% of all publications. The United States of America (n = 181) and Australia (n = 137) ranked second and third, respectively. The United States of America had the highest centrality (centrality = 0.25) and extensively cooperated with other countries, followed by Australia (centrality = 0.20) and England (centrality = 0.15) (Table 1). Analysis of the top 10 countries/regions with the largest number of publications revealed that China was relatively late in conducting related research, and its number of publications did not increase until 2012 (Figure 4). Analysis of institutions A total of 301 institutions published papers related to this field. The network diagram of these institutions' published papers was constructed using CiteSpace, and it comprised 301 nodes and 1,178 links (Figure 5). The Q value and S value of the network were 0.6177 and 0.8583, respectively. The diagram showed extensive cooperation among institutions. The National University of Singapore had the largest number of publications (n = 48), followed by Sun Yat-Sen University (n = 41) and the Australian National University (n = 41). Among the top 10 institutions with the most publications, three were from Singapore, two each from China and Australia, and one each from the United States of America, the United Kingdom and Germany (Table 2). Cardiff University in the United Kingdom had the most extensive foreign cooperation (centrality = 0.12), followed by the National University of Singapore (centrality = 0.11). Analysis of authors Four hundred and fourteen authors published papers related to this field. The network diagram of the authors' published papers was constructed using CiteSpace, and it consisted of 414 nodes and 1,540 links (Figure 6). The Q value and S value of the network were 0.6177 and 0.8583, respectively. Saw S had the largest number of publications (n = 39). Morgan I (n = 27) and Jonas J (n = 23) ranked second and third, respectively. Analysis of keywords The 605 publications included in this study comprised 450 keywords, including "refractive error," "prevalence," "outdoor activity," "risk factors," "progression," and "children". The keyword network graph was constructed using CiteSpace and consisted of 450 nodes and 1,808 links (Figure 7). We used CiteSpace to conduct a burst detection of keywords in the literature to explore the research frontiers in the relationship between outdoor activities and myopia (Figure 8). "Ocular refraction" and "follow-up" were the first keywords found, between 2007 and 2011 and between 2008 and 2013, respectively.
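The 'centrality' reported by CiteSpace is betweenness centrality computed on these networks. As an illustration only, such readings could be reproduced from an exported edge list with networkx; the edge list below is hypothetical, not the study's data.

```python
import networkx as nx

# Hypothetical country co-authorship edges, weighted by jointly
# authored papers (placeholder values, not the study's data).
edges = [("USA", "Australia", 12), ("USA", "China", 9),
         ("China", "Australia", 7), ("USA", "England", 6),
         ("Singapore", "Australia", 5), ("England", "Singapore", 3)]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Normalized betweenness centrality, as CiteSpace reports it.
centrality = nx.betweenness_centrality(G, normalized=True)

# Modularity Q of a community partition (CiteSpace's Q value); the S
# value (mean silhouette) would additionally need cluster similarities.
communities = nx.community.greedy_modularity_communities(G)
Q = nx.community.modularity(G, communities)

print(sorted(centrality.items(), key=lambda kv: -kv[1]))
print("Q =", round(Q, 4))
```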
"School, " "risk, " and "prevention" were the latest keywords found since 2019. Analysis of cited journals Ten journals were cited more than 300 times between 2006 and 2021. The journal co-citation network was constructed using CiteSpace, and it consisted of 668 nodes and 4,596 links ( Figure 9). The Q value and S value of the network was 0.4955 and 0.7582, respectively. As can be seen from the journal co-citation network analysis, the most influential articles on the relationship between outdoor activities and myopia were mainly published in Investigative ophthalmology & visual science, Ophthalmology, Optometry and Vision Science, British Journal of Ophthalmology and Ophthalmic and physiological optics (Table 3). American journal of ophthalmology had the biggest centrality (0.03), followed by Ophthalmology (Figure 10). The Q value and S value of the reference co-citation network was 0.4025 and 0.767, respectively. Analysis of cited references A cluster analysis of the cited references was performed ( Figure 10). The citation network of the references was divided into eight citation clusters (Q value = 0.6177, S value = 0.8583): #0 light exposure, #1 objective measure, #2 clinical practice, #3 outdoor activity, #4 refractive development, #5 current status, #6 current research, and #7 national myopia prevention. Discussion Myopia has become a global public health concern (30). Previous studies have shown that outdoor activity is one of the most important environmental factors for myopia (31), and increasing the time spent outdoors can reduce the risk of myopia (32, 33). Web of Science is a world-renowned retrieval tool for research publications and citations. Users can comprehend the research status and trends in a particular field by statistical analysis of the literature included in this tool. Bibliometrics is a method of indexing and analyzing research directions and hotspots. Conducting bibliometrics through the Web of Science Core Collection database has become popular (34). CiteSpace is the mainstream software used in bibliometric research, which can provide users with a scientific atlas of a certain research field (35). We used CiteSpace to analyze and outline research trends in the relationship between outdoor activities and myopia over the past 15 years. Studies have shown that myopia can be influenced by innate factors and acquired factors. Epidemiological investigations have shown that myopia is related to outdoor activities (36). The randomized controlled experiment conducted by He M from Sun Yat-Sen University shows that the outdoor exercise group can significantly reduce the prevalence of myopia among adolescents by adding 40 min of outdoor activity per week in school compared with the control group (30). Wu PC from Kaohsiung Chang Gung Memorial Hospital conducted an intervention study among Grade 1 students from 16 schools and found that the students from outdoor exercise intervention group had significantly reduced axial elongation . FIGURE Authors of studies on the relationship between outdoor activities and myopia from to . FIGURE Keywords of studies on the relationship between outdoor activities and myopia from to . Top keywords with the strongest citation bursts. and the risk of myopia compared with the control group (9). The number of publications was divided into two time periods in the present study, with the year 2010 set as the boundary. 
Only a few related studies were published before 2010, with the numbers of publications and citations being less than 10 and 100, respectively. This showed that the relationship between outdoor activities and myopia did not attract widespread attention in the past. However, an opposite trend was observed after 2010, with the number of research articles analyzing the association between outdoor activities and myopia gradually increasing. Saw S from the National University of Singapore had the largest number of publications. Investigative ophthalmology & visual science is the most important journal, followed by Ophthalmology and Optometry and Vision Science. China, the United States of America, Australia, and Singapore were the primary countries conducting research on the association between outdoor activities and myopia in recent years. China was the only developing country among the top 10 countries with the largest number of publications. Nevertheless, it contributed the largest number of publications in this field. The number of publications in China has increased rapidly, especially after 2012, but research cooperation between China and other countries needs to be strengthened. It can be seen that the United States of America played a leading role in this field, followed by Australia. In recent years, the association between outdoor activities and myopia has attracted the attention of scholars all around the world. This trend could be attributed to the increasing prevalence of myopia worldwide. The prevalence of myopia in East Asian countries, such as Singapore and China, is significantly high (37), and the increasing prevalence of myopia in China has attracted considerable attention from researchers and the government (38). Keywords reflect the theme of an article. High-frequency keywords can be regarded as hot topics in related research fields. Frontiers in research can be recognized by detecting keywords with rapid growth in frequency (39). Analysis of burst keywords showed that students were the main focus of researchers, while the risk of and preventive measures for myopia were the hotspots for future studies. These findings suggest that studies on the association between outdoor activities and myopia among students should be strengthened. It was observed that outdoor activities could reduce the risk of myopia. However, further studies are required to determine differences in their protective effects on children and adolescents based on region, gender, and ethnicity (30, 40-43). Additionally, although several studies have reported that outdoor activities can reduce the prevalence of myopia (44)(45)(46), there is insufficient evidence showing that outdoor activities can inhibit its progression (47). Therefore, further research is needed to determine the pathophysiological mechanisms by which outdoor activities inhibit the progression of myopia. The importance of outdoor activity duration has been determined primarily through survey questionnaires. Likewise, few randomized clinical trials and longitudinal follow-up studies have explored the effects of outdoor activities on myopia. There is a lack of consensus regarding the intensity or duration of outdoor activities most suitable to prevent myopia. This was the first analysis using bibliometrics and visualization techniques to explore the impact of outdoor activities on myopia by scrutinizing research published between 2006 and 2021.
The Q values and S values of the network diagrams in the present study were greater than 0.3 and greater than 0.5, respectively, which indicates that the community structure of the networks of scientific knowledge is significant and that their clustering is reasonable. Through this study, we identified the hotspots along with renowned research institutions and teams in this field. We also gained insight into the ideas and directions of future research. However, the present study had some limitations. First, the language of included publications was limited to English, which resulted in a bias in literature selection. Second, the articles were retrieved using subject terms, and thus some relevant publications may have been omitted from this study. Third, CiteSpace was developed based on the data format of the Web of Science database, and data from the Web of Science database can be directly imported into CiteSpace for visual analysis (48). Therefore, we only retrieved literature from the Web of Science database as the object of analysis in this study, and did not include publications not indexed in the Web of Science database. In fact, CiteSpace provides format conversion for literature from other databases such as Scopus, so literature from other databases can also be imported into CiteSpace for analysis after format conversion (48). Fourth, CiteSpace cannot analyze and evaluate the quality of publications. Although this study has some limitations, it still reveals the trends and hotspots of future research in this field. In conclusion, we analyzed the literature on the relationship between outdoor activities and myopia using the Web of Science database. The impact of outdoor activities on myopia has attracted the attention of researchers, resulting in an annual increase in the number of publications. China was the main contributor of studies in this field. The breadth and depth of cooperation between countries or institutions must be strengthened to analyze this relationship. The National University of Singapore, Saw S and Investigative ophthalmology & visual science were the most influential institution, author and journal, respectively. Moreover, additional evidence is required to explore the effects of outdoor activities on myopia. Likewise, more research is needed to improve our understanding of the association between outdoor activities and myopia and to reduce the risk of myopia among school students. In general, researchers can benefit from the bibliometric analysis in this study, because they can understand the knowledge structure, hotspots and frontiers of this field. Data availability statement The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Criteria for short QT interval based on a new QT-heart rate adjustment formula Background A short QT interval, within which an increased risk for atrial fibrillation and/or fatal cardiac arrhythmias occurs, has been difficult to define. Methods The lower percentiles of a new QTc formula were determined, using a precise mathematical fitting of the QT-heart rate relationship from the ECGs of 13,600 individuals from the NHANES II and III surveys. Results The QTc intervals for persons in the lower fifth percentile, second (2.5th) percentile and first percentile were calculated. Conclusions Based on the new spline formula, a short QTc is defined at the first percentile, and is less than 380 ms in both men and women. Introduction A short QT interval on the electrocardiogram identifies individuals with a high risk for the development of atrial fibrillation and/or fatal cardiac arrhythmias [1,2]. However, questions about the prevalence of this condition have been difficult to answer [3][4][5][6][7][8][9][10]. The difficulties in identifying this condition are several-fold, including determining the diagnostic criteria and selecting the appropriate QT-heart rate adjustment formula. Heart rate is well recognized to affect the QT interval, necessitating the application of a heart rate correction formula (QTc). The Bazett formula is used by most studies and guidelines to define the short QTc [4][5][6][9][10][11]. Unfortunately, this formula is only satisfactory when applied in cases involving a heart rate of around 60 bpm; it becomes progressively less accurate at faster and slower heart rates [12]. Recently, a new QT-heart rate correction formula was developed based on the ECGs of about 13,600 individuals in the United States' National Health and Nutrition Examination Survey (NHANES) population study, and was shown both to be relatively independent of heart rate and also superior to other formulae [13]. However, it was used to evaluate long QT intervals [13]; the other end of the QT spectrum, the short QT interval, was not considered. Thus, the purpose of the current study was to define the QT interval limits that constitute a short QT interval. Material and methods The methodology used to construct the new QTc has been presented in detail [13]. Briefly, a spline correction function, modeled using a cubic regression spline with four knots and an adjustment for gender, was fit to the QT and heart rate ECG data of 13,600 individuals involved in the US NHANES II and III studies, conducted by the US Centers for Disease Control and Prevention (CDC) [13]. Taking the persons' ages into account, the spline QT correction was developed, with each observation weighted by its respective NHANES sampling weight and with spline parameters selected as those that minimized the least-squares fit of the QT-heart rate relationship [13]. The ECG exclusion criteria were abnormalities that made the calculation of the QT interval difficult, such as left or right bundle branch block, the presence of left ventricular hypertrophy or myocardial infarction, or a non-sinus rhythm. The heart rate and QT interval measurements were made by a computerized ECG analysis algorithm, which eliminated intra-observer variability [13]. Results The QTc duration at the fifth percentile is relatively stable across all ages, until the age of 85 years, when there is a rise in men and an apparent reduction in women (Fig. 1).
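The spline correction itself is fully specified in [13]; purely to illustrate the kind of fitting step described above, the following is a minimal sketch of a weighted least-squares cubic spline with four interior knots. The knot positions, the toy data, and the referencing to HR = 60 bpm are assumptions for illustration, not the published formula.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# hr: heart rates (bpm), qt: QT intervals (ms), w: sampling weights;
# all are simulated placeholders standing in for the NHANES records.
rng = np.random.default_rng(0)
hr = np.sort(rng.uniform(45, 120, 2000))
qt = 440 * np.sqrt(60 / hr) + rng.normal(0, 10, hr.size)  # toy QT-HR data
w = rng.uniform(0.5, 2.0, hr.size)

knots = [60, 75, 90, 105]  # four interior knots (placeholder positions)
spline = LSQUnivariateSpline(hr, qt, t=knots, w=w, k=3)

def qtc_spline(qt_ms, hr_bpm):
    """Heart-rate-adjusted QT: shift the observed QT by the fitted
    QT-HR curve so that all values are referenced to HR = 60 bpm."""
    return qt_ms - (spline(hr_bpm) - spline(60.0))

print(qtc_spline(360.0, 95.0))
```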
These changes in the older age groups of both genders are most likely due to the smaller number of persons at older ages falling in the lower percentiles of the QT-heart rate correction. The mean QTc for the fifth percentile was 391.2 ms for men and 391.5 ms for women. The QTc duration at the 2.5th percentile was similar, being relatively stable across all ages until the age of 85 years, when there is also a rise in men and an apparent reduction in women. The mean QTc for the 2.5th percentile was 385.8 ms for men and 386.1 ms for women. The QTc duration at the first percentile is relatively stable across all ages until the age of 75 years. The greater variability observed in the older age group is most likely due to the smaller number of persons at the older ages. The mean QTc for the first percentile was 379.6 ms for men and 370.3 ms for women. Discussion This study is the first to define the short QT interval based on a new QTc formula that is relatively independent of the effect of heart rate on QT, and does so from a large population base using statistically defined criteria. Previous suggestions for a criterion for short QT interval have varied between different recommendations. A QTc of 390 ms and shorter has been proposed by an American Heart Association committee [14]. This would be consistent with the fifth percentile of the spline QTc. A QTc ≤ 340 ms has also been proposed, based on data from cases with a short QTc that in addition also had a personal and/or familial history of cardiac arrest [15]. The proposal that short QT syndrome can be diagnosed in the presence of a QTc < 360 ms and one or more of the following (a pathogenic mutation, a family history of short QT syndrome, a family history of sudden death at age ≤ 40, and/or survival following a ventricular tachycardia/fibrillation episode in the absence of heart disease) [8] constitutes a multivariate definition, of which QTc is only one criterion. Values of 360 ms or smaller would be considerably less than the first percentile using the spline QT correction formula. In the absence of the other criteria, a short QT of less than or equal to 330 ms has been proposed [8]. The Seattle criteria for the ECG evaluation of athletes suggested a criterion for short QT at an interval of ≤ 320 ms [16]. Other studies have considered a QT ≤ 300 ms as a short QTc [5,7,10,11]. Recognizing the percentile distributions may explain, at least in part, why four studies with a total of 266,035 persons did not identify a single case with a short QTc [5,7,9,11]. The short QTc has been calculated from the Bazett formula in most of the studies available [4][5][6][9][10][11]. However, it should be noted that the QTc based on the Bazett formula is known to undercorrect the QT interval at fast heart rates and overcorrect at slow heart rates, as compared with other correction formulae [12]. Thus, utilizing the Bazett formula for QT-heart rate correction has the potential for error, and therefore may obscure the identification of the short QTc syndrome. In contrast, the new formula effectively eliminated differences due to heart rate, with only some small random variability [13]. It is important to point out that the new QTc formula incorporated both an age and a gender correction. This correction effectively eliminated differences due to age and gender, so the percentiles are approximately equal, with some small random variability, in men and women [13].
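The percentile cut-points discussed here can be recomputed from any QTc sample with a weighted-quantile helper that respects sampling weights in the NHANES spirit; a minimal sketch follows (the data below are simulated placeholders).

```python
import numpy as np

def weighted_percentile(values, weights, q):
    """Return the q-th percentile (0-100) of `values`, where each value
    carries a sampling weight; linear interpolation on the weighted CDF."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = (np.cumsum(w) - 0.5 * w) / np.sum(w)
    return float(np.interp(q / 100.0, cdf, v))

# Hypothetical QTc samples (ms) for one sex/age stratum.
rng = np.random.default_rng(1)
qtc = rng.normal(410, 18, 5000)
wts = rng.uniform(0.5, 2.0, qtc.size)
for q in (1, 2.5, 5):
    print(q, round(weighted_percentile(qtc, wts, q), 1))
```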
Conclusion These data provide an objective criterion to identify individuals with a short QT interval, setting the first percentile as the threshold at which to begin the clinical search for reversible factors that might shorten the QT interval, or to identify individuals with an inherited abnormality that predisposes them to arrhythmias. An applet to compute the spline QTc from user input is available at https://elenaszefer.shinyapps.io/qtc_nhanes_spline. The applet is easy to use, so that QT intervals that are of concern to the clinician can be readily entered. The QTc will be calculated along with the percentile rank of the value. The use of the percentile distribution also provides a framework to evaluate the literature on other proposed criteria for short QT syndrome. Grant funding The author declares no conflict of interest related to this study. There are no conflicts of interest or any relationships with industry or financial associations that might pose a conflict of interest.
Long term regularity of the one-fluid Euler-Maxwell system in 3D with vorticity We prove long-term regularity of solutions of the one-fluid Euler-Maxwell system in 3 spatial dimensions, in the case of small initial data with nontrivial vorticity. Introduction A plasma is a collection of fast-moving charged particles and is one of the four fundamental states of matter. Plasmas are the most common phase of ordinary matter in the universe, both by mass and by volume. Essentially all of the visible light from space comes from stars, which are plasmas with a temperature such that they radiate strongly at visible wavelengths. Most of the ordinary (or baryonic) matter in the universe, however, is found in the intergalactic medium, which is also a plasma, but much hotter, so that it radiates primarily as X-rays. We refer to [3,7] for physics references in book form. One of the basic models for describing plasma dynamics is the Euler-Maxwell "two-fluid" model, in which two compressible ion and electron fluids interact with their own self-consistent electromagnetic field. In this paper we consider a slightly simplified version, the so-called one-fluid Euler-Maxwell system (EM) for electrons, which accounts for the interaction of electrons and the electromagnetic field, but neglects the dynamics of the ion fluid. The model describes the dynamical evolution of the functions n_e : R^3 → R (the density of the fluid) and v_e : R^3 → R^3 (the velocity of the fluid). (The first author was supported in part by NSF grant DMS-1265818. The second author was supported in part by NSF grant DMS-1500958.) There are several physical constants in the above system: −e < 0 is the electron's charge, m_e is the electron's mass, c denotes the speed of light, and P_e is related to the effective electron temperature (that is, k_B T_e = n_0 P_e, where k_B is the Boltzmann constant). In the system above we have chosen, for simplicity, the quadratic adiabatic pressure law p_e = P_e n_e^2 / 2. The system has a family of equilibrium solutions (n_e, v_e, E′, B′) = (n_0, 0, 0, 0), where n_0 > 0 is a constant. Our goal here is to investigate the long-term stability properties of these solutions. 1.1. The main theorem. The system (1.1)-(1.2) is a complicated coupled nonlinear system of ten scalar evolution equations and two constraints. To simplify it, we first make linear changes of variables to normalize the constants. The system (1.1)-(1.2) then takes the form ∂_t n + div((1 + n)v) = 0, together with evolution equations for v, E, B and the constraints div(B) = 0, div(E) + n = 0. (1.4) The system depends only on the parameter d in the second equation. In the physically relevant case we have d ∈ (0, 1), which we assume from now on. We now define the vorticity of our system (allowed to be nontrivial) as Y := B − ∇ × v. We note that the system (1.3) admits a conserved energy, defined by E_conserved := ∫_{R^3} [ d|n|^2 + (1 + n)|v|^2 + |E|^2 + |B|^2 ] dx. (1.6) To state our main theorem we need to introduce some notation. (ii) One can derive more information about the solution (n, v, E, B) of the system. For example, the solution satisfies the uniform bounds ||(n(t), v(t), E(t), B(t))||_{H^{N_0}} ≲ ε for all t ∈ [0, T_{δ_0}], where Y(t) = B(t) − ∇ × v(t). Moreover, the solution decouples into a superposition of two dispersive components U_e and U_b, which propagate with different group velocities and decay, and a vorticity component Y, which is essentially transported by the flow. The two dispersive components can be studied precisely using the Z-norm, see Definition 2.1. 1.2. Previous work on long-term regularity.
The local regularity theory of the Euler-Maxwell system follows easily by energy estimates. The question of long-term regularity is much more interesting and has been studied in several recent papers. The dynamics of the full Euler-Maxwell system is extremely complex, due to a large number of coupled interactions and many types of resonances. Even at the linear level, there are ion-acoustic waves, Langmuir waves, light waves, etc. At the nonlinear level, the Euler-Maxwell system is the origin of many well-known dispersive PDEs which can be derived via scaling and asymptotic expansions. See also the introduction of [18] for a longer discussion of the Euler-Maxwell system in 3D, and its connections to many other models in mathematical physics, such as the Euler-Poisson model, the Zakharov system, the KdV, the KP, and the NLS. Because of this complexity it is natural to study first simplified models, such as the one-fluid Euler-Poisson model (first studied by Guo [17]) and the one-fluid Euler-Maxwell system (which is the system (1.1)). In particular, the one-fluid Euler-Maxwell system shares many of the features and the conceptual difficulties of the full system, but is simpler at the analytical level. Under suitable irrotationality assumptions, this system can be reduced to a coupled system of two Klein-Gordon equations with different speeds and no null structure. While global results are classical in the case of scalar wave and Klein-Gordon equations, see for example [24,25,27,28,29,30,5,33,35,8,9,1,2], it was pointed out by Germain [13] that there are key new difficulties in the case of a coupled system of Klein-Gordon equations with different speeds. In this case, the classical vector-field method does not seem to work well, and there are large sets of resonances that contribute in the analysis. Global regularity for small irrotational solutions of this model was proved by Germain-Masmoudi [14] and Ionescu-Pausader [23], using more subtle arguments based on Fourier analysis. In 3 dimensions, nontrivial global solutions of the full two-fluid system were constructed for the first time by Guo-Ionescu-Pausader [18] (small irrotational perturbations of constant solutions), following the earlier partial results on simplified models in [17,20,14,23]. The one-fluid Euler-Poisson system and the one-fluid Euler-Maxwell system have also been studied in 2 dimensions, where the global results are harder due to less dispersion and slower decay. See [22], [31], and [11]. 1.2.1. Nontrivial vorticity. We remark that all the global regularity results described above are restricted to the case of solutions with trivial vorticity. This is also the case with the global regularity results in many other quasilinear fluid models, such as water waves; see the introduction of [12] for a longer discussion. In fact, all proofs of global existence in quasilinear evolutions depend in a crucial way on establishing quantitative decay of solutions over time. On the other hand, one usually expects that vorticity is transported by the flow and does not decay. This simple fact causes a serious obstruction to proving global existence for solutions with dynamically nontrivial vorticity. In this paper we would like to initiate the study of long-term regularity of solutions with nontrivial vorticity. However, we are not able to establish the global existence of such solutions for any of the Euler-Maxwell or Euler-Poisson systems.
Instead we prove that sufficiently small solutions extend smoothly over a time of existence that depends only on the size of the vorticity. Such a theorem can be interpreted as a quantitative version of the global regularity theorems for small solutions with trivial vorticity described earlier. In fact, our Theorem 1.2 immediately implies the global regularity theorems of [14] and [23], simply by letting δ_0 → 0. An important consideration to keep in mind is the length of the time of existence of solutions. In our case we show that this time of existence is at least c/δ_0, where δ_0 is the size of the vorticity component of the initial data, and c is a small constant. This is consistent with the time of existence of the simple model equation (1.15). One can think of this equation as a model for the vorticity equation, in dimension 3, which ignores all the other interactions and the precise structure of the vorticity equation. The c/δ_0 time of existence appears to be quite robust, and one can hope to prove a theorem like Theorem 1.2 in other models in which global regularity for solutions with trivial vorticity is known. One might also hope that more involved analysis would allow one to extend solutions beyond the c/δ_0 time of existence, particularly in certain models in dimension 2 where the vorticity equation is known to behave better than the simple equation (1.15). We hope to return to such issues in the future. 1.3. Main ideas of the proof. The classical mechanism to establish long-term regularity for quasilinear equations has two main components: (1) control of high frequencies (high-order Sobolev norms); (2) dispersion/decay of the solution over time. The interplay of these two aspects has been present since the seminal work of Klainerman [27]-[30], Christodoulou [5], and Shatah [33]. In the last few years new methods have emerged in the study of global solutions of quasilinear evolutions, inspired by the advances in semilinear theory. The basic idea is to combine the classical energy and vector-field methods with refined analysis of the Duhamel formula, using the Fourier transform. This is the essence of the "method of space-time resonances" of Germain-Masmoudi-Shatah [15,16], see also Gustafson-Nakanishi-Tsai [21], and of the refinements in [22,23,18,19,11,10,12], using atomic decompositions and sophisticated norms. This general framework needs to be adapted to our case, where we have non-decaying components and we are aiming for a lifespan that depends only on the size of these components. To illustrate the main ideas, consider the schematic system (1.16). Here one should think of U as generic dispersive variables (take for instance the Klein-Gordon case Λ = √(1 − ∆)), while Y represents generic non-dispersive vorticity-type components. The quadratic terms are to be thought of as generic nonlinearities that may lose derivatives. See (2.6) for the precise system in our case, keeping in mind that there are two types of dispersive variables, corresponding to two different speeds of propagation. Our analysis of solutions of such a system contains three main ingredients: • Energy estimates for the full system. These estimates allow us to control high Sobolev norms and weighted norms (corresponding to the rotation vector-field) of the solution. They are not hard in our case, since we are able to prove independently L^1_t pointwise control of the solution. • Vorticity energy estimates. This is a new ingredient in our problem.
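To see why c/δ_0 is the natural lifespan, it may help to record the standard heuristic behind such model equations; the following computation is a sketch and is not claimed to be the paper's equation (1.15), whose exact form is not reproduced above:

∂_t y = y^2, y(0) = δ_0 ⟹ y(t) = δ_0 / (1 − δ_0 t).

The solution stays finite precisely for t < 1/δ_0: a quadratic self-interaction of a non-decaying quantity of initial size δ_0 can, in general, be controlled only on a time interval of length ∼ 1/δ_0, which matches the c/δ_0 time of existence above.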
We need to show that the vorticity stays small, that is, of size δ_0, on the entire time of existence. These estimates depend again on the L^1_t pointwise control of the solution and on the structure of the nonlinearity of the vorticity equation (without an O(U^2) term). • Dispersive analysis. The dispersive estimates, which lead to decay, rely on a bootstrap argument in a suitable Z norm. The norm we use here is similar to the Z norm introduced in the 2D problem in [11] and accounts for the rotation invariance of the system. We analyze carefully the Duhamel formula for the first equation in (1.16), in particular the quadratic interactions related to the set of resonances. The analysis of the terms O(Y^2) and O(YU), which contain the transport × transport → dispersive and the transport × dispersive → dispersive interactions, is new when compared to the irrotational global results described earlier, such as [23]. On the other hand, the analysis of the term O(U^2), which involves a large set of space-time resonances, due to the two different speeds of propagation, has similarities with the analysis in [22,23,18,19]. At the implementation level, we remark that we are able to completely decouple the decay parameter β, which can be taken very small, see Definition 2.1, from the smoothness parameters N_0 and N_1. These parameters were related to each other in earlier work, such as [22,23,18,19]. As a result, we are able to reduce substantially the total number of derivatives N_0 and N_1 in the main theorem. 1.4. Organization. The rest of the paper is organized as follows: in section 2 we introduce most of the key definitions, such as the Z norm, rewrite our main system as a dispersive system for the quasilinear variables (diagonalized at the linear level), and state the main bootstrap proposition. In section 3 we summarize some lemmas that are used in the rest of the paper, mostly concerning linear analysis and the resonant structure of the oscillatory phases. In section 4 we prove our main energy estimates, both for the full energy of the system and for the vorticity energy. Finally, in sections 5-7 we prove our main dispersive estimates for the decaying components of the solution. Preliminaries In this section we rewrite our main system as a quasilinear dispersive system (diagonalized at the linear level), summarize the main definitions, and state the main bootstrap proposition. By taking divergences and curls, the system (1.3) gives the evolution equations. Let U_e := Λ_e Z + iF, with Λ_e := √(1 + d|∇|^2). The formulas above show that, conversely, the physical variables n, v, E, B can be recovered from the dispersive variables U_e, U_b, Y by the formulas, see (2.2). The formulas show that the sets of variables (n, v, E, B, Y) and (U_e, U_b, Y) are elliptically equivalent, for example for any m ≥ 1. For any a < b ∈ Z and j ∈ [a, b] ∩ Z we use the notation in (2.9). For any x ∈ R let x^+ := max(x, 0) and x^− := min(x, 0). Let P_k, k ∈ Z, denote the operator on R^3 defined by the Fourier multiplier ξ → φ_k(ξ). Similarly, for any interval I ⊆ R let P_I denote the operator on R^3 defined by the Fourier multiplier ξ → φ_I(ξ). For any (k, j) ∈ J let Q_{jk} denote the operator in (2.10), where β := 10^{−6}. Notice that, when σ = e, we have the simpler formula. The operators A^σ_{n,(j)} are relevant only when σ = b and j ≫ 1, to localize to thin neighborhoods of the space-time resonant sets.
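For orientation, we recall the standard construction behind the projections P_k; the normalization below is the usual one and is stated here as an assumption, since the paper's precise cutoffs φ_k are defined in formulas not reproduced above:

Fix an even smooth function φ : R → [0, 1] with φ(x) = 1 for |x| ≤ 1 and φ(x) = 0 for |x| ≥ 2, and set φ_k(ξ) := φ(|ξ|/2^k) − φ(|ξ|/2^{k−1}), so that φ_k is supported in {2^{k−1} ≤ |ξ| ≤ 2^{k+1}} and Σ_{k∈Z} φ_k(ξ) = 1 for ξ ≠ 0. Then P_k f is defined on the Fourier side by (P_k f)^(ξ) = φ_k(ξ) f̂(ξ).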
The small factors 2 −4βn in (2.19), which are connected to the operators A b n,(j) , are important only in the space-time resonant analysis, in the proof of the bound (7.27) in Lemma 7.7. The main bootstrap proposition. Our main result is the following proposition: and for some sufficiently large constant C. Then, for any t ∈ [0, T ], The constant C can be fixed sufficiently large, depending only on d, and the constant ǫ is small relative to 1/C. Given Proposition 2.2, Theorem 1.2 follows using a local existence result and a continuity argument. See [23, Sections 2 and 3] (in particular Proposition 2.2 and Proposition 2.4) for similar arguments. The rest of this paper is concerned with the proof of Proposition 2.2. This proposition follows from Proposition 4.1, Proposition 4.2, and Proposition 5.1. Some lemmas In this section we collect several lemmas that are used in the rest of the paper. We fix a sufficiently large constant D ≥ 10D 0 . 3.0.1. Integration by parts. We start with two lemmas that are used often in integration by parts arguments. See [23,Lemma 5.4] and [11,Lemma ] for the proofs. We will need another result about integration by parts using the rotation vector-fields Ω j . The lemma below (which is used only in the proof of the more technical Lemma 7.7) follows from Lemma 3.8 in [11]. and Pr 1 : A similar bound holds for the integrals I 2 p and I 3 p obtained by replacing the vector-field Ω 1 with the vector-fields Ω 2 and Ω 3 respectively, and replacing the cutoff function ψ 1 with cutoff functions ψ 2 and ψ 3 respectively (defined as in (3.4), but with the projection Pr 1 replaced by the projections if (1 + β/20)ν ≥ −m, then the same bounds hold when I j p , j ∈ {1, 2, 3}, are replaced by the integrals (notice the additional localization in modulation factor ϕ ν (Φ(ξ, η))) 3.0.2. Linear and bilinear operators. To bound bilinear operators, we often use the following simple lemma. for any exponents p 1 , p 2 , p 3 ∈ [1, ∞] satisfying 1/p 1 + 1/p 2 + 1/p 3 = 1. As a consequence Our next lemma, which is also used to bound bilinear operators, shows that localization with respect to the phase is often a bounded operation. See [11,Lemma 3.10] for the proof. Assume that 1/2 = 1/q + 1/r, χ is a Schwartz function, and where the constant in the inequality only depends on the function χ. The nonlinearities in the dispersive system (2.6) and the elliptic changes of variables (2.1) and (2.7) involve the Riesz transform. It is useful to note that our main spaces are stable with respect to the action of singular integrals. More precisely, for integers n ≥ 1 let denote classes of symbols satisfying differential inequalities of the Hörmander-Michlin type. 2 Notice that this is a slightly larger class of phases than those defined in section 2, i.e. it includes the contributions of the vorticity variables (corresponding to µ = 0 or ν = 0). The phase functions. We collect now several properties of the phase functions Φ = Φ σµν . In this subsection we assume that σ, µ, ν ∈ {e, b, −e, −b} (so µ = 0, ν = 0). We start with a suitable description of the geometry of resonant sets. See [11, Proposition 8.2 and Remark 8.4] for proofs; the arguments provided in [11] are in two dimensions, but they extend with no difficulty to three dimensions. We start with a general upper bound on the size of sublevel sets of functions. Moreover, if n = l = 1, K is a union of at most A intervals, and |Y ′ (x)| ≥ L on K, then As a consequence, we have precise bounds on the sublevel sets of our phase functions: 3.0.4. 
Linear Estimates. We prove now several linear estimates. Given a function f , (k, j) ∈ J , and n ∈ {0, . . . , j + 1} (recall the notation in subsection 2.2) we define Notice that f j,k,n is nontrivial only if n = 0 or (n ≥ 1, σ = b, and 2 k ≈ 1). Moreover, As a consequence, for any k ∈ Z one has (ii) Assume σ ∈ {e, b}, N ≥ 10, and Then, for any (k, j) ∈ J and n ∈ {0, . . . , j + 1}, The hypothesis gives Using the definition, On the other hand, if m ≥ 10 then the usual dispersion estimate gives The bound (3.22) follows. The bound (3.23) follows also, by summation over j and n. (ii) The hypothesis (3.24) shows that f j,k,n H N The first inequality in (3.25) follows from the interpolation inequality and the Sobolev embedding (along the spheres S 2 ) sup The second inequality follows similarly. To prove (3.26), for θ ∈ S 2 fixed we estimate using the localization of the function Q j,k f in the physical space. The desired bounds (3.26) follow from (3.25). The bounds in (3.27) follow as well, if we notice that derivatives in ξ corresponds to multiplication by 2 j factors, due to space localization. (iii) We may assume f H 2 = 1. Using Sobolev embedding in the spheres, as in (3.30), The desired estimate follows in the same way as the bound (3.26). Energy estimates In this section we prove our main energy estimates. In the rest of the paper we often use the standard Einstein convention that repeated indices are summed. We work in the physical space and divide the proofs into two parts: a high order estimate for the full system (the H N 0 norm in (2.25)), and a weighted estimate only for the vorticity components (the estimate (2.26)). 4.1. The total energy of the system. In this subsection we prove the following: Proof. Recall the real-valued variables F, G, Z, W defined in (2.1), 3 It is important to write the system in terms of these variables, not the more physical variables n, v, E, B, in order to be able to prove energy estimates that include the rotation vector-fields. Step 1. For m ∈ [0, N 0 ] ∩ Z we define the energy functionals E m : [0, T ] → R, Notice that the case m = 0 is similar (but not identical, because of the different cubic correction) to the conserved physical energy in (1.6). Notice also that, for any t ∈ [0, T ], In particular, there is a constant C 1 ≥ 1 such that, for any t ∈ [0, T ], We would like to estimate now the energy increment. For L ∈ V N 0 let E L denote the term in (4.4) corresponding to the differential operator L. We calculate, using (4.3), where N F and N G denote the nonlinearities corresponding to the equations for F and G in (4.3). Since L and |∇| commute, all the quadratic terms in the expression above cancel, so n)LG · LN G − 2nLG · LW + 2LZ · L(R · (nv)) + 2LW · L(R × (nv)) dx. (4.6) Step 2. We would like to show that, for any t ∈ [0, T ], All the terms in (4.6) are at least cubic, but we also need to avoid potential loss of derivatives. Some of the terms in (4.6) can be estimated easily, using the definitions (4.2), i.e. since these terms do not lose derivatives. For the remaining terms, we extract first the components that could lose derivatives. Clearly Using the general bound at the expense of acceptable errors. For (4.7) it remains to prove that where We also have, using integration by parts Combining the remaining terms in E ′′ L and recalling that n = −|∇|Z and ∂ j v j = |∇|F , it remains to show that This follows using again the bound (4.8) and the identity −|∇| = R j ∂ j . The desired bound (4.7) follows. 4.2. Control of the vorticity energy. 
In this subsection we prove the following: Proof. We define vorticity energy functionals Notice that there is a constant C 2 ≥ 1 such that, for any t ∈ [0, T ], (4.14) To prove the proposition we need to estimate the increment of the vorticity energy. More precisely, we would like to show that Indeed, assuming this, we could estimate, for any t ∈ [0, T ], where we have used the assumptions (2.22) and T ≤ ǫ/δ 0 . The desired conclusion (4.12) follows, provided that C 2 ≪ C ≪ ǫ −1/10 . To prove (4.15), using the last equation in (2.6) we calculate Since div(Y ) = 0 we calculate . Therefore, after integration by parts to remove the potential derivative loss coming from the term v l ∂ l Y j , we see that |∂ t E Y L | is bounded by a sum of integrals of the form where a + b ≤ N 1 , L a 1 ∈ V a , L b 2 ∈ V b , Q 1 , Q 2 are operators defined by S 10 symbols as in Lemma 3.5, and σ ∈ {e, b}. In view of (3.10), and using the bound for any t ∈ [0, T ] and L ′ ∈ V N 1 (see (2.24) and (4.14)), the integral in (4.16) is dominated by The desired bound (4.15) follows once we notice that, using (3.23) This is bounded by Cǫ(1 + t) −1−β , in view of the bootstrap assumption (2.25). The desired conclusion (4.15) follows, which completes the proof of the proposition. Improved control of the Z-norm, I: setup and preliminary estimates In the next three sections we prove the following bootstrap estimate for the Z-norm. We define V σ (t) = e itΛσ U σ (t), σ ∈ {e, b}, as before. Also, for simplicity of notation, let see (2.7), our system (5.2) can be written in the form for σ ∈ {e, b, 0}. Here P ′ := {e, b, −e, −b, 0} and the nonlinearities are defined by for suitable multipliers m σµν which are sums of functions of the form m(ξ)m ′ (ξ − η)m ′′ (η). In terms of the functions V σ , the Duhamel formula is, in the Fourier space, In integral form this gives, for σ ∈ {e, b} and t ∈ [0, T ], A rotation vector-field Ω ∈ {Ω 1 , Ω 2 , Ω 3 } acts on the Duhamel formula according to We iterate this formula. It follows that for any L ∈ V N 1 and α we have where here we set with |L| designating the order of the differential operator L, and In integral form this becomes We summarize below some of the properties of the functions f β,L θ and ∂ t f β,L θ : Proposition 5.2. (i) The multipliers m L σµν , L ∈ V N 1 , are sums of functions of the form (1 + |ξ| 2 ) 1/2 q(ξ)q ′ (ξ − η)q ′′ (η), q S n + q ′ S n + q ′′ S n n 1, (5.12) for any n ≥ 1, see (3.9) for the definition of the symbol spaces S n . The main reduction. We return now to the proof of Proposition 5.1. We have in view of Definition 2.1. We use the integral formula (5.11) and decompose the time integral into dyadic pieces. More precisely, given t ∈ [0, T ], we fix a suitable decomposition of the function 1 [0,t] , i.e. we fix functions q 0 , . . . , q L+1 : R → [0, 1], |L − log 2 (2 + t)| ≤ 2, with the properties For Proposition 5.1 it suffices to prove the following: With the hypothesis in Proposition 2.2 and the notation above, we have Here o := 10 −8 is a small constant. We prove this proposition in the next two sections. We remove first the contribution of very low and very high input frequencies. Then we consider the interactions containing one of the vorticity variables, in which either µ = 0 or ν = 0 (by symmetry we may assume that ν = 0). Finally, in section 7 we consider the purely dispersive interactions, i.e. µ, ν ∈ {e, b, −e, −b}. We will often need to localize the phase, in order to be able to integrate by parts in time. 
For this we define the operators I σµν l,s , I σµν ≤l,s , and I σµν l,s , l ∈ Z, by We start with a lemma that applies for all µ, ν ∈ P ′ . Lemma 6.1. (Very large or very small input frequencies) We have Proof. We estimate, using Definition 2.1, Lemma 3.3, (5.13), and (5.19), if k 1 ≤ k 2 . The bound (6.1) follows by summation over (k 1 , k 2 ) ∈ X k with k 2 ≥ k 1 , k 2 ≥ j/41 + βm. For the second bound we estimate if k 1 ≤ k 2 . The bound (6.2) follows. On the other hand, if both these inequalities hold then we estimate the L ∞ norm of the dispersive term using (3.23), , which suffices to complete the proof of the lemma. Proof. Let k := max(k + 1 , k + 2 ) and define f µ j 1 ,k 1 and f 0 j 2 ,k 2 as in (3.19). We consider three cases: (6.6) Then we estimate, using (5.13)-(5.14) and the last inequality in (3.22), The desired conclusion follows for the sum over the pairs (j 1 , j 2 ) with either |j 1 − m| ≥ m/100 or j 2 ≥ m/100. It remains to consider the pairs (j 1 , j 2 ) with |j 1 − m| ≤ m/100 and |j 2 | ≤ m/100. We estimate the L 2 norm in the expression above using Schur's test. Moreover using (5.14) for the last estimate. Applying now Lemma 3.9 and (6.9) we get Using Definition 2.1 and (5.14), Therefore, by Schur's lemma and recalling that l 0 = −m/10, Notice that this suffices to control the contribution of the pairs (j 1 , j 2 ) as in (6.8). On the other hand, if (1 − β)|m − j 1 | + j 2 ≤ 8k + βm, (6.15) then we decompose dyadically in modulation. The contribution of low modulations |Φ σµν | ≤ 2 l 0 can be estimated using Schur's lemma. As in the proof of (6.10), we can estimate Notice that this suffices to control the contribution of the pairs (j 1 , j 2 ) as in (6.15) if On the other hand, for l ≥ l 0 we integrate by parts in time and estimate, as in (6.11), where in the last line we used Lemma 3.4, the bounds (5.16) and (3.23), and the bound which is obtained by interpolation from the last two bounds in (5.15). Therefore recalling that −l 0 ≤ 9k + 3βm, see (6.15) and (6.17). The desired conclusion follows from (6.14), (6.16), and (6.18). This completes the proof of the lemma. 7. Improved control of the Z-norm, III: dispersive interactions In this section we prove Proposition 5.3 when µ, ν ∈ {e, b, −e, −b}. In view of Lemma 6.1 it suffices to prove that where the pair (k 1 , k 2 ) is fixed and satisfies The proof we present here is similar to the proof in [11,Sections 6,7]. It is simpler, however, because we work here in 3 dimensions, as opposed to 2 dimensions, and this leads to more favorable dispersion and decay properties of the solutions. For the sake of completeness we provide all the details in the rest of this section. As in the previous section, we drop the superscripts σµν and consider several cases. In many estimates below we use the basic bounds on the functions f µ = f α 1 ,L 1 and, for any k ∈ Z, see Proposition 5.2, where (γ, L, α) ∈ {(µ, L 1 , α 1 ), (ν, L 2 , α 2 )} and t = 1 + t. Recall also that |L 1 | + |L 2 | ≤ N 1 and |α 1 | + |α 2 | ≤ 4. We will often use the integration by parts formula (5.26). We divide the proof into several lemmas, depending on the relative size of the main parameters. As before, we start with the simpler cases and gradually reduce to the main resonant cases in Proposition 7.5. Proof. We define f µ j 1 ,k 1 and f ν j 2 ,k 2 as in (3.19). As in the proof of Lemma 6.2, integration by parts in ξ together with the change of variables η → ξ − η show that the contribution is negligible unless min(j 1 , j 2 ) ≥ j(1 − β/10). 
Without loss of generality we may assume that k 1 ≤ k 2 . For any j 1 , j 2 , we can estimate Indeed, this follows by an L 2 ×L ∞ estimate, using (7.3), the first bound in (3.22), and Definition 2.1 (we decompose in n and place the function with the larger n in L ∞ in order to gain the favorable factor 2 −n/2+4βn in (3.22)). The desired conclusion follows unless Assume now that (7.6) holds. In particular, k 2 ≥ D and |k − k 2 | ≤ 4. We further decompose our operator in modulation. As in Lemma 6.5, with l 0 := −14k − 20βm we estimate We estimate the L 2 norm in the expression above using Schur's test. Using Lemma 3.9, it follows that On the other hand, for l ≥ l 0 + 1 we integrate by parts in time. Using (5.26) we bound where in the last term we estimated ∂ s f ν j 2 ,k 2 (s) L 2 ǭ2 −m−50βm 2 −30k + 2 (interpolation between the last two bounds in (7.4)). Therefore, for j 1 , j 2 as in (7.6) and l 0 = −14k − 20βm, The desired conclusion follows from (7.7) and (7.8). Lemma 7.2. The bound (7.1) holds provided that (7.2) holds and, in addition, Proof. Clearly k ≤ 2D, thus |k + 1 − k + 2 | ≤ 3D. We define f µ j 1 ,k 1 and f ν j 2 ,k 2 as before and estimate Indeed, this follows by estimating the term with the smaller j in L ∞ and using the last bound in (3.22), and the term with the larger j in L 2 and using the Definition 2.1. The desired conclusion follows unless Assume now that (7.9) holds. In particular k ≤ −D. We consider first the high modulations, l ≥ l 0 + 1, where l 0 := −2k + 1 − D. Using (5.26) and (7.3)-(7.4) we estimate Deduce now that and since 2 3k/2 2 j(1+β) 2 k(1/2−β) 2 −m/6−βj this takes care of the large modulation case. We can now estimate the contribution of large modulations. We consider two cases: Case 1. Assume first that In this case we do not lose derivatives. Assuming, without loss of generality, that j 1 ≤ j 2 we estimate first where we used Lemma 3.4 and the second estimate in (3.22). This suffices to bound the contribution of the components with j 1 ≤ m − 20βm and j 2 ≥ m − βm − 3k. Since χ is rapidly decreasing we have ϕ k · N R 2 L ∞ 2 −4m , which gives an acceptable contribution. On the other hand, in the support of the integral defining N R 1 , we have that |s + λ| ≈ 2 m and integration by parts in η (using Lemma 3.1) gives ϕ k · N R 1 L ∞ 2 −4m . Therefore the contribution of N R can be estimated as claimed in (7.33). The desired conclusion (7.46) follows, which completes the proof of the lemma.
Damage Occurrence in Welded Structures of the Bucket-Wheel Boom Causes of damage occurrence in vital components of welded structures of the bucket-wheel excavator boom (DU1) at the coal landfill of the thermal power plant 'Nikola Tesla A' in Obrenovac (Serbia) are investigated. The bucket-wheel excavator was produced by the French company 'Ameco', and it moves along a circular track. Taking into account the lack of technical documentation, all tests and investigations were carried out under the assumption that the welded structures were made of structural steels S355 and S235. The investigation of the causes of damage occurrence is based on the results of non-destructive tests (NDT) and tensometric measurements. Introduction There are two bucket-wheel excavators at the coal landfill of the thermal power plant 'Nikola Tesla A' in Obrenovac (Serbia), designated DU1 and DU2, and produced in France. These bucket-wheel excavators move along a circular track (widely known as a polar track). Taking into account the long period of operation under severe working conditions (dynamic loading with varying amplitudes), as well as the fact that during their design there were practically no possibilities to carry out a detailed stress-strain analysis, the most loaded elements and their connections have to be checked continuously [1][2][3][4][5][6][7][8][9][10][11][12]. This especially refers to welded joints and welded structures [3,4,7,8]. The bucket-wheel excavator with designation DU1 is presented in figure 1. Considering the fact that there was no technical documentation, all tests and investigations presented in this paper were carried out under the assumption that the welded structures were made of structural steels S355 and S235 [13]. Damages at Vital Sections of the Welded Lattice Structure of the Bucket-Wheel Boom Damage was detected mainly through visual testing (VT) of parent material and welded joints. No defects were detected with other non-destructive testing methods, and no deviations from expected results were observed during hardness testing. In figure 2 the reinforcements in the damaged support structure of the cylinder, whose role is to enable the movement of the bucket-wheel boom, are shown, while in figure 3 the reinforcements that were embedded in the damaged section of one of the vital girders of the structure are shown. Sections of the vital structure in the upper zone of the bucket-wheel boom with damage in the area of welded joints are shown in figure 4. a) View from the right side to the bucket-wheel b) View from the left side to the bucket-wheel Stress State of Vital Structures of the Bucket-Wheel Boom Based on Measured Local Strains Stresses were determined on the basis of tensometric measurements of local strains in the areas of parent material and welded joints of the lattice structure of the bucket-wheel boom of the excavator with designation DU1. Measurements were executed using electrical-resistance strain gauges TML PL-10 and TML PL-20. The measurement equipment for detection and processing of signals from the gauges into readable strain values is shown in figure 5. Table 3. Stresses at the vital structure of the bucket-wheel boom (cross-section 11, Fig. 10).
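The conversion from measured local strains to the stresses reported in tables 1-3 is not detailed in the text; for uniaxial gauge readings it reduces to Hooke's law. The following is a sketch only: E = 210 GPa is the customary modulus for structural steels such as S235/S355, and the gauge readings are placeholders.

```python
E_STEEL = 210e9  # Young's modulus of structural steel (S235/S355), Pa

def stress_from_strain(strain_microstrain):
    """Uniaxial Hooke's law: gauge reading in micro-strain -> stress in MPa."""
    return E_STEEL * strain_microstrain * 1e-6 / 1e6

# Placeholder gauge readings (micro-strain) at three measurement points.
for point, eps in {"MP-1": 120.0, "MP-2": 85.0, "MP-3": 42.0}.items():
    print(point, round(stress_from_strain(eps), 1), "MPa")
```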
Analysis of Causes of Damage in Vital Components of the Bucket-Wheel Boom Based on the analysis of results of non-destructive tests executed at vital structures of the bucket-wheel boom, it can be concluded that initial cracks within welded joints can propagate until they reach the critical length, which confirms the assumption that damages at vital structures occurred due to inadequate welding technologies during the manufacture of the bucket-wheel excavator and/or during previous repairs performed on parent material and welded joints, figures 2-4. The significant presence of defects in the area of welded joints, figure 4, is caused by complex geometry, figure 11. In accordance with recommendations [14], it was adopted that the critical value of the fatigue strength is σDwj = 45 MPa. The tensile strength of the weld metal σUTSmin is determined by the expressions given in [15] and [13]. It is obvious that in both representative combinations of static stresses (figure 12), the measured stresses in critical areas of vital components of the bucket-wheel (tables 1-3) lie beneath the limit line which connects the fatigue strength σDwj and the tensile strength σUTSmin of the weld metal, which proves that damages at vital components of the welded lattice structure of the bucket-wheel boom occurred due to the application of inadequate welding technologies during the manufacture of the bucket-wheel excavator and/or during the previous repairs. One should notice that the stresses, calculated here on the basis of measured strains, are in good agreement with the numerical results obtained by the FEM, as shown in [16]. Conclusion Results of tensometric measurements showed that the welded structure of the bucket-wheel boom was designed in complete agreement with its function and operational loads, with low service stresses which could not produce the damage shown in this investigation. Therefore, it is clear that the damage at vital components of the welded lattice structure of the bucket-wheel boom occurred due to the application of inadequate welding processes and/or procedures during the manufacture and/or during the previous repairs.
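The limit-line check described above can be sketched numerically. The following is a minimal sketch, not the authors' actual calculation: it assumes a straight limit line between the fatigue strength σDwj = 45 MPa (at zero static stress) and the minimum tensile strength σUTSmin of the weld metal (at zero stress amplitude), i.e., a Goodman-type criterion; the value of σUTSmin and the sample stress states are hypothetical placeholders, since the measured values live in tables 1-3, which are not reproduced here.

```python
# Minimal sketch of the limit-line check (Goodman-type criterion, assumed).
# Straight line from (0, sigma_Dwj) to (sigma_UTSmin, 0) in the
# (static stress, stress amplitude) plane; all values in MPa.
SIGMA_DWJ = 45.0        # fatigue strength of the welded joint, per [14]
SIGMA_UTS_MIN = 360.0   # hypothetical minimum tensile strength of weld metal

def below_limit_line(sigma_static: float, sigma_amplitude: float) -> bool:
    """True if the stress state lies beneath the assumed limit line."""
    return sigma_amplitude / SIGMA_DWJ + sigma_static / SIGMA_UTS_MIN < 1.0

# Hypothetical measured stress states (static, amplitude) per cross-section:
measured = {"cross-section 11": (120.0, 18.0), "cross-section 7": (90.0, 25.0)}
for name, (s_m, s_a) in measured.items():
    print(name, "below limit line:", below_limit_line(s_m, s_a))
```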
2022-06-22T15:13:11.689Z
2022-06-20T00:00:00.000
{ "year": 2022, "sha1": "e32ba8cf791a97866ccf71a9a3a37b983455894d", "oa_license": "CCBY", "oa_url": "https://www.scientific.net/EI.2.41.pdf", "oa_status": "HYBRID", "pdf_src": "ScientificNet", "pdf_hash": "b01586a8e41061827ff7b40c971f0e9458dadcc2", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
118785567
pes2o/s2orc
v3-fos-license
Logarithmic temperature profiles in the ultimate regime of thermal convection We report on the theory of logarithmic temperature profiles in very strongly developed thermal convection in the geometry of a Rayleigh-Bénard cell with aspect ratio one and discuss the degree of agreement with the recently measured profiles in the ultimate state of very large Rayleigh number flow. The parameters of the log-profile are calculated and compared with the measured ones. Their physical interpretation as well as their dependence on the radial position are discussed. I. INTRODUCTION The ultimate state of thermal convection in a Rayleigh-Bénard cell is generally believed to be fully turbulent, both in the bulk and in the boundary layers and with respect to both the velocity as well as the temperature fields. As shown in ref. [1], this implies very characteristic scaling behavior of the heat transport flux, Nu ∝ Ra^0.38, as well as of the strength of the large scale flow, Re ∝ Ra^0.50. Here the Rayleigh number Ra, the Nusselt number Nu, and the Reynolds number Re are defined as usual, see e.g. [2,3]. In a series of recent experiments, cf. [4][5][6][7][8], these scaling laws of Nu and Re have been measured and reported. In the previous work [1] we have calculated the global scaling exponents in the ultimate state. This ultimate state theory, though derived by employing the characteristic profiles of fully developed turbulent flows, concentrated on the implications for and the interpretation of the various global scaling exponents. We did not explicitly describe and report the local profiles themselves. Meanwhile the local thermal profiles in the very large Ra-regime have been measured, cf. [9]. We thus present here the corresponding local profiles, in the spirit of and in extension of our earlier theory from ref. [1]. II. THERMAL LOG-PROFILES OF FLOW ALONG PLATES We start from the equations of motion for the thermal field T(x, t). The involved velocity field is understood to solve the corresponding Navier-Stokes equation for an incompressible flow, ∇ · u = 0, i.e., we assume that the temperature behaves like a passive scalar. The molecular properties kinematic viscosity ν and thermal diffusivity κ are taken as temperature independent fluid parameters. In the long time mean the advection-diffusion balance for the averaged temperature Θ reads ∂_i(U_i Θ + ⟨u_i'θ'⟩_t − κ ∂_i Θ) = 0. The first term vanishes, since only the x-derivative can contribute but Θ depends on z only. The last term for the same reason reads κ ∂_z² Θ(z). The second term has to be modeled; we use a mixing length type ansatz for an eddy thermal diffusivity. First, ⟨u_i'θ'⟩_t, which as a long time mean depends on z only, is approximated by employing the temperature gradient hypothesis, ⟨u_z'θ'⟩_t = −κ_turb(z) ∂_z Θ(z). The eddy diffusivity κ_turb(z) is supposed to be a property of the flow, not of the fluid. The physical flow properties on which κ_turb(z) can depend are the distance z from the plate and the characteristic velocity scale u_* of the turbulent fluctuations, defined by the wall stress ν ∂_z U_x(z = 0) ≡ u_*², the shear rate or drag on the wall. For dimensional reasons we write κ_turb(z) = κ̃_θ u_* z. Here the factor κ̃_θ is the dimensionless thermal von Kármán constant, whose empirical value depends on the flow type and for Rayleigh-Bénard flow is not known. The equation for the temperature profile then is 0 = ∂_z[κ_turb(z) ∂_z Θ(z) + κ ∂_z Θ(z)]. Integrate from z = 0 to some arbitrary z and use κ_turb(z = 0) = 0 to find the constant thermal flux J = Nu κ Δ/L, with Δ = T_b − T_t the temperature difference between the bottom and the top plates, which causes the thermal flow. J is z-independent. This leads to the thermal profile equation (κ + κ_turb(z)) ∂_z Θ(z) = −J.
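For readability, the pieces of the derivation just described can be collected in display form. This is a reconstruction from the surrounding text (the displayed equations themselves were lost in extraction); the equation numbers (5)-(7) are inferred from later cross-references and may not match the published numbering:

```latex
% Reconstructed summary of the thermal-profile derivation (assumed numbering).
\begin{align*}
  \kappa_{\mathrm{turb}}(z) &= \tilde\kappa_\theta\, u_* z,
  \qquad u_*^2 \equiv \nu\,\partial_z U_x\big|_{z=0},\\
  \bigl(\kappa + \kappa_{\mathrm{turb}}(z)\bigr)\,\partial_z \Theta(z)
  &= -J, \qquad J = \mathrm{Nu}\,\kappa\,\Delta L^{-1}, \tag{5}\\
  \Theta(z) &= -\frac{J z}{\kappa},
  \qquad 0 \le z \lesssim z_{*,\kappa}
  \quad \text{(linear thermal sublayer)}, \tag{6}\\
  \Theta(z) &= -\frac{J}{\tilde\kappa_\theta u_*}
  \left[\ln\!\frac{z u_*}{\nu} + f(\mathrm{Pr})\right]
  \quad \text{(turbulent log-layer)}. \tag{7}
\end{align*}
```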
If J is positive, i.e., the heat flows upwards, the thermal gradient is negative, Θ(z) decreases, as is characteristic for the bottom plate. With the choice Θ(z = 0) = 0 at the bottom plate we obtain Θ(z) = −J ∫₀^z dz'/(κ + κ_turb(z')). From this equation we draw conclusions for the bottom part profile. In the immediate vicinity of the plate it is κ ≫ κ_turb, giving rise to the "linear thermal sublayer" of the profile, Θ(z) = −J · (z/κ), 0 ≤ z ≲ z_*,κ. This linear thermal sublayer extends until z = z_*,κ, for which κ = κ̃_θ u_* z_*,κ, i.e., until the molecular and the eddy thermal diffusivity are of the same size. The relation to the kinetic sublayer width z_* = ν/u_* is z_*,κ = z_*/(κ̃_θ Pr). Beyond the linear thermal sublayer the profile is increasingly dominated by the eddy diffusivity, κ_turb ≫ κ, implying ∂_z Θ(z) = −J/(κ̃_θ u_* z), and thus Θ(z) = −(J/(κ̃_θ u_*)) [ln(z u_*/ν) + f] is the thermal profile in the turbulent BL. The integration constant f may depend, of course, on κ and ν, but only in the form ν/κ, since f has unit 1. Thus f = f(Pr), which is reported in [10] to empirically be about 1.5 for air, i.e., for Pr ≈ 0.7. To make use of this formula for insight into the thermal log-profile we need the strength of the velocity fluctuations u_* as a function of Re(Ra). This has been derived in [1]. But before determining u_* we now make contact with the experimentally measured profile, cf. [9]. The experimental log-profile is parametrized in the form (T(z) − T_m)/Δ = A ln(z/L) + B (8). Here T_m = (T_b + T_t)/2 = T_b − Δ/2 is the mean temperature in the Rayleigh-Bénard cell. Comparing eqs. (7) and (8) leads to explicit expressions for the dimensionless empirical parameters A and B, eqs. (10) and (11) below. One easily verifies that the dimensions of A and B are indeed 1. Besides trivial parameters and a yet undetermined empirical parameter, the thermal von Kármán constant κ̃_θ, two physical quantities determine the measured constants A and B. This is first the strength of the heat flux J = Nu · κΔL⁻¹, describing the amplitude of the log-profile, and second the strength of the turbulent velocity fluctuations u_*. While Nu and its scaling behavior in the ultimate range is experimentally (and theoretically) rather well known, the velocity amplitude u_* has been calculated in [1]. We make use of those results now. A. Parameter A We start with the discussion of the amplitude A of the log-profile. From eq. (10) the parameter A can be written in the form of eq. (12), with Re ≡ UL/ν. Here we have introduced the wind amplitude U and the corresponding Reynolds number Re. For not yet too large Ra this amplitude U usually is visualized as a large scale circulation (LSC) in the RB sample. For the very large Ra considered here, on the other hand, such an LSC will probably not survive under the strong turbulent fluctuations. But its remnants locally in space and time still must have physical importance, since there apparently is enough shear in the plates' boundary layers to induce transition to turbulence, as can be observed experimentally in the scaling exponents of the Nusselt number Nu and the Reynolds number Re versus Ra, as well as in the measured characteristic changes of the scaling exponents indicating this transition, see [4][5][6][7][8] and also [9]. To quantify this we use results from [12]. In the ultimate range Ra ≳ 10^15 the effective Reynolds number was found to be Re = 0.0439 × Ra^0.50, resulting in Re(Ra = 10^15) = 1.39 × 10^6. Extrapolating Re from smaller Ra, known as the classical range of RB flow, in [12] it is found Re = 0.407 Ra^0.423, the scaling exponent being well consistent with the GL theory [13].
According to this classical range formula the wind amplitude would be measurably smaller at Ra = 10^15, namely Re = 0.901 × 10^6. In the cited RB experiment it is Nu = 5631 and Pr = 0.859 at Ra = 1.075 × 10^15. From this we can calculate the coherence length ℓ_coh in the turbulent bulk. To quantify it we compare with the thickness z_* of the linear viscous sublayer above the plate, which will be introduced and calculated later. It will be estimated as z_*/L = 1.98 × 10⁻⁵ for Ra = 10^15. Then z_*/ℓ_coh = 0.106, meaning that even the smallest eddies of the bulk cascade are still 10 times larger than the kinetic viscous sublayer extension. That holds all the more for the energy carrying larger eddies up to the macroscopic scale L. Thus the bulk flow, even if strongly fluctuating, can provide sufficient shear to imply the turbulence transition at the plates (and most probably also at the side walls). Having clarified the meaning and physical importance of the wind amplitude U, we can determine the log-profile and its parameters A and B. In [1] we have derived two expressions for the Nusselt number Nu and the velocity fluctuation amplitude relative to the given asymptotic flow velocity, u_*/U, in the logarithmic ultimate range, cf. eqs. (20) and (2) of that paper; they are referred to as eqs. (13) and (14) here. The fluctuation amplitude u_* solves the implicit equation (14). Here b is an empirical parameter of the velocity profile. It characterizes the position of the buffer range, in which the transition occurs from the linear viscous sublayer to the log-layer range in the velocity profile. Usually that profile, known as the law of the wall, is written U(z)/u_* = κ̃⁻¹ ln(z u_*/ν) + B_u. Here κ̃ is the kinetic von Kármán constant, taken in the following as κ̃ = 0.4. From (14) we see that the smaller b is, the smaller u_* will be. Similarly u_* will decrease with increasing Reynolds number Re and thus with growing Rayleigh number Ra. For Re(Ra) we meanwhile have experimental information, see [12]: In the ultimate state it is Re_eff = 0.0439 Ra^0.50. We thus have typical Re-numbers of order 10^6 or more in the ultimate range of thermal convection with Ra ∼ 10^15. The velocity profile parameter b also enters the Pr-dependence of the temperature profile via the definition f̃(Pr) = f(Pr) + (ln b)/2. Using all these previous results we can express the amplitude parameter A in the explicit form of eqs. (17) and (18). Two remarks may be useful. In this expression for A the thermal von Kármán constant κ̃_θ does not appear explicitly; instead the kinetic one, κ̃, shows up. This comes from eliminating Nu/κ̃_θ in the defining equation (12) for A by using eq. (13). Second, the originally derived explicit Nu-dependence can be completely substituted and expressed in terms of the log-corrections originating from the log-profile of the velocity. The only reminder of the thermal profile is the thermal buffer range shift f̃(Pr), the thermal analog of the corresponding kinetic shift B_u. Both A and B depend on Re as well as on u_*/U, and thus on Ra. This implies the Ra-dependence of these measured fit-amplitudes A and B. Numerical values for Re and u_*/U have been given in Table I. Introducing this Re-scaling of Re_* into eq. (18) shows that the amplitude A decreases with Ra, though very slowly. Over the next decade it will be smaller by approximately a factor of 0.906, i.e., its value at 10 Ra is expected to be about 9.4% less. This predicted slow decrease of A with Ra could be consistent with experimental observation [9].
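The quoted Reynolds numbers can be checked directly from the two scaling laws of [12]. A minimal sketch, pure arithmetic with no assumptions beyond the formulas quoted above:

```python
# Check the quoted wind Reynolds numbers at Ra = 1e15 from both fits of [12].
Ra = 1e15

Re_ultimate = 0.0439 * Ra**0.50    # ultimate-range fit
Re_classical = 0.407 * Ra**0.423   # classical-range extrapolation

print(f"ultimate range:  Re = {Re_ultimate:.3g}")   # ~1.39e6, as in the text
print(f"classical range: Re = {Re_classical:.3g}")  # ~0.901e6, as in the text
```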
We emphasize that α′ decreases even further with increasing Re, i.e., asymptotically in Ra the amplitude A will approach a constant, Ra-independent limit A_∞. One may wish to try coming closer to the experimental value for A by allowing deviations from the pure log-profile. Indeed such a deviation can be seen in experiment, see [9], Fig. 1. Another consequence of such a deviation from the pure log-profile is that the intimate connection between the amplitudes A and B, to be discussed in the next chapter, will lose validity. B. Parameter B Let us now analyze the parameter B as derived in eq. (11), in particular the right hand part of this equation. Consider first the term ln(L/z_*). With the above given definition z_* = ν/u_* one obtains L/z_* = L u_*/ν and thus B = A [ln(Re u_*/U) + f(Pr)] + 1/2. Irrespective of any approximate calculation of the coefficient A, the other coefficient B is always about 70% of A, and in particular it is also negative. Using this the T-profile can be written as (T(z) − T_m)/Δ = A ln(2z/L). This is consistent with the underlying idea that T(z = L/2) = T_m, which has been used when relating the T-profile, expressed in terms of the thermal current J or Nu, with the driving temperature difference Δ = T_b − T_t, and which also is an immediate consequence of the Ansatz (8) for fitting the data. Let us come back to the consequences of the deviation of the (x-component of the) velocity profile from the classical case with asymptotically constant amplitude U. In an RB cell the U(z)-profile does not stay asymptotically constant with z but goes through a maximum, then decreases further and even changes sign at z/L ≈ 0.5. This different maximum-and-beyond behavior of the wind leads to deviations in the thermal profile from a log-law. Therefore the intimate relation between A and B, viz. B/A = ln 2, typical for the log-profile, will no longer be valid. Also this statement is consistent with experimental observation, cf. [9]: the ratio B/A has quite some scatter, and although not too far away it apparently differs from ln 2. To briefly summarize, the thermal profile is characterized by, starting from the plate, (i) a tiny linear thermal sublayer of extension an order of magnitude less than the bulk coherence length, (ii) a buffer range in which the linear increase in the sublayer turns over into the (iii) log-law, observable over a broader z-range up to about a quarter of the RB cell height, then (iv) changing the profile again due to the decrease of the blowing wind including its directional change, from positive to negative (or vice versa), which may be denoted as the temperature's center profile. This latter one, the center part, still has to be explored in more detail. IV. POSITION DEPENDENCE OF PROFILE PARAMETERS A(r) AND B(r) The basic assumption of the just given derivation of the profile parameters A, B is a plane parallel homogeneous flow with velocity U over an infinitely extended plate. In a Rayleigh-Bénard cell this, at best, is realized in the center-range of the circular cell, i.e., at r = 0. We now make a crude model for the shape of the large scale circulation (LSC). We have in mind the case of aspect ratio Γ = 1, but one can argue similarly for Γ = 1/2 (and others). Then, if one moves away from the center range at r = 0, the relevant flow velocity leading to the velocity shear in the BL and the perpendicular logarithmic profiles of velocity and temperature is the x-component of the LSC only. We therefore have to substitute in the above formulas always U ⇒ U_x = U_x(r).
If we consider for simplicity a circular LSC, we get U_x = U cos φ = U √(1 − r²/R²); here U still denotes the LSC amplitude, which defines Re = UL/ν. Inserting this into expression (18) results in the r-dependent profile coefficient A(r) = A/√(1 − r²/R²). Expressed in terms of the (relative) wall distance ξ ≡ (R − r)/R, thus 0 ≤ ξ ≤ 1 between the wall and the center, respectively, it is A(ξ) = A/√(ξ(2 − ξ)). The coefficient A(ξ) decreases with increasing distance from the wall ∝ ξ^(−1/2). This result is consistent with the experimental finding of an A-decrease towards the center. The data presented in [9] have been taken at the position (R − r)/L = 0.0045, corresponding to ξ = 0.009. This implies A(r/R = 0.991) = A(ξ = 0.009) = 7.47 A. We note that these last formulas cease to be valid in the limit ξ → 0 or r → R, because for sufficiently small U_x(r) the BL locally is no longer turbulent; finally there is only upward flow and one is in the BL of the side wall. But note further that with the full, non-approximate expression (17) for the profile coefficient A the additional term in the denominator weakens the r-dependence, all the more the larger r becomes. Then even the limit r → R exists, leading to a finite A(r = R). Some qualitative features of the r-dependence of A(r) are the following: (i) For small r the amplitude A(r) always increases as A(r) = A(1 + const·(r/R)²); there is no term linear in r for analyticity reasons. (For a strictly circular flow it is const = 0.5.) (ii) For r near the side wall, i.e., for small ξ = (R − r)/R, the amplitude A(ξ) ∼ ξ^(−n) decreases with increasing distance from the wall with an exponent n = 1/2 for a circular LSC; for an elliptically shaped wind curve it will be steeper, thus n is larger. In the particular case of aspect ratio Γ = 1/2 there may be two circular rolls above each other, which again would lead to n = 1/2. In case there is only one single roll, this will be elliptically shaped and thus n is larger. In case both LSC shapes are present part of the time, the exponent n will be somewhere in between; i.e., in any case one would find a steeper decrease of A with the wall distance as compared to the circular case for Γ = 1. This is consistent with the measurement of n ≈ 2/3 in the case Γ = 0.5 (Goettingen Uboot team, private communication). (iii) If Γ is a little below 1, n will be a bit larger than 1/2; if Γ is a little above 1, n is expected to be somewhat smaller than 1/2; etc. V. CONCLUDING REMARKS We have calculated the profile parameters A and B, defined in eq. (8), of the experimentally measured logarithmic temperature profiles in the ultimate state of Rayleigh-Bénard convection as realized for very large Ra. In the case of a pure thermal log-law as well as reflection symmetry with respect to the middle plane z = L/2 = R of the Γ = 1 cylindrical sample, which we expect and which should hold for approximately temperature independent material properties, the coefficient B equals A up to a factor ln 2. Since the real wind profile is different from the standard case of approaching a constant when going off the plate, in the RB cell instead going down towards the center, even followed by a directional inversion of the wind, one finds deviations from this value. The amplitude A physically measures the strength of the turbulent velocity fluctuation scale u_* relative to the LSC velocity U (together with the von Kármán constant κ̃); if one is sufficiently near to the side wall, it also measures the Prandtl number dependent temperature profile shift constant f̃(Pr).
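The amplitude ratio quoted for the measurement position can be reproduced from the circular-LSC formula of section IV. A minimal sketch; the only input is the geometric relation A(ξ) = A/√(ξ(2 − ξ)) derived above:

```python
import math

def amplitude_ratio(xi: float) -> float:
    """A(xi)/A for a circular LSC, with xi = (R - r)/R the relative wall distance."""
    return 1.0 / math.sqrt(xi * (2.0 - xi))

# Measurement position of [9]: (R - r)/L = 0.0045, i.e. xi = 0.009 for R = L/2.
print(f"A(xi=0.009)/A = {amplitude_ratio(0.009):.2f}")  # ~7.47, as in the text
print(f"A(xi=1)/A     = {amplitude_ratio(1.0):.2f}")    # 1.00 at the cell center
```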
We explain the r-dependence of A(r) by the decreasing magnitude of the local flow velocity U_x(r) parallel to the bottom plate with increasing distance r from the center, which by (18) or (17) leads to an increase of the A-amplitude with increasing r. The numerically obtained value for A for the case b = O(1) does not coincide too well with the measured one, if we use the values of the fluctuation scale u_* calculated in [1]. When we published that work, the meanwhile measured values for the sample's LSC-response Re = Re(Ra) (cf. [12]) were not yet known and were assumed to be smaller; if our interpretation is correct it means that the experimental u_*/U values are smaller in an RB-sample, because U ∝ Re is larger than assumed previously and b is smaller. Thus there is very interesting information in the measured T-profile parameters A(r) and B(r). But, as usual in Rayleigh-Bénard flow, to complete, check and confirm the theoretical interpretation and explanation will need information also about the velocity profile and its expected considerable deviations from the model of classical channel or pipe flow, as is used here for lack of a better one.
2012-08-13T14:24:34.000Z
2012-08-13T00:00:00.000
{ "year": 2012, "sha1": "5babd2007d23292e5c3807e36592f676b8fe63b0", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1208.2597", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5babd2007d23292e5c3807e36592f676b8fe63b0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
261208653
pes2o/s2orc
v3-fos-license
Transforming CS Curricula into EU-Standardized Micro-Credentials Micro-credentials are a way to integrate flexible learning pathways into the classic forms of education defined in the European Qualification Framework (EQF). They allow members of the workforce to get necessary skills described, certified, and recognized in a transparent and portable way. One way for universities to enter this market for lifelong learning is to convert existing study programs into smaller units, namely, micro-credentials. This process of converting a study program consisting of modules into small, independent pieces is called unbundling. When unbundling a program, the existing modules have to be converted to a standard EU-wide recognizable form. In this paper, we will describe the process we used to convert modules from our study program at DHBW. The first step converts the skill descriptions into a standard form. Since there is no common accepted formal standard, we use the Dublin descriptors as a way to structure the skills on the different abstraction levels, and ESCO-terms as a widely used standardized vocabulary. The second step breaks down modules of 3-12 ECTS into smaller constituents (each ECTS corresponds to a workload of around 30 hours). Typical micro-credentials have a size of 1 to 3 ECTS, a group or stack of micro-credentials corresponds to one module. Introduction The European skills agenda from 2020 names micro-credentials (MC) as an important tool for citizens to develop future skills demanded by employers. They serve the purpose of supporting life-long learning and international validity of certificates for distributed learning in time and space. To document this personal record, platforms like Europass 1 are developed and rolled out. To enable the recognition of courses including their assessments, it is essential to define outcomes, competences and skills in a standardized manner that everyone can interpret at an international level. In the EU, qualification frameworks define levels of achievement in the different Bologna cycles. Dublin Descriptors 2 define one such framework by describing levels of learning through "generic statements of typical expectations of achievements and abilities associated with qualifications that represent the end of each of a Bologna cycle". These levels can be used to structure the learning outcomes within module descriptions from basic facts, their application, or critical reflection and communication on the given topics. A significant component of the standardization of learning outcomes is a controlled vocabulary that defines the different skills. The ESCO classification provides such terms for some areas (ESCO) 3 . In addition to providing personalized learning paths, such a standardized structure forms the basis of learning data that can support comparative studies as well as best practices. Open educational resources can be indexed uniformly as a function of best practices. Any such comparative study is very difficult without these standards. In particular, these MCs pave the way for learning path definitions that will be multidisciplinary in future. One such example is "Data Science" that combines domain knowledge, technology, and mathematics. Such a degree is highly personalized and enabled through MCs. 
Using EU-standards for MCs supports unification of module descriptions on several levels: (1) Learning outcomes formulation (skills) sorted by level (Dublin Descriptors) (2) Teaching content formulation (knowledge) (3) groupings into smaller, modular, and stackable components, and finally (4) assessment and activities. Especially, transversal and interdisciplinary skills can be described for MCs with ESCO. A more detailed discussion of this aspect can be found in section 4. Currently, most universities do not follow common standards when describing competences and skills in module descriptions. "Traditionally higher education was relatively explicit about the knowledge (outcomes) to be achieved, or at least the knowledge covered by the curriculum. It was however somewhat less explicit on the skills or competences required for the award of a given qualification. Competences, such as those of critical evaluation, were and are embedded or implicit in the assessment values and practices." (Bologna Working Group 2005, p. 63) The same goes for ethics, security, or sustainability. Principally, module descriptions are difficult to standardize across modules and even more so across majors as their authors differ and are often untrained in this matter. The current process, therefore, frequently leads to inconsistencies. This problem is even more complicated than usual with DHBW (our university) because of size and history, from our website: "Baden-Wuerttemberg Cooperative State University (Duale Hochschule Baden-Württemberg/DHBW) is the first higher education institution in Germany which combines on-the-job training and academic studies and, therefore, achieves a close integration of theory and practice, both being components of cooperative education. With around 34,000 enrolled students, over 9,000 partner companies and more than 145,000 graduates, DHBW counts as one of the largest higher education institutions in the German Federal State of Baden-Wuerttemberg." 4 Being large and distributed over 10 campuses, running study programs in parallel, makes coordination difficult and slow. As a result, a change to our (many) module descriptions to adapt EU standards requires a prolonged change-process. In this paper, the authors present representative examples of learning outcomes from very simple to highly complex. We will show the process of converting from the existing more or less "free form" definition into a standardized form using Dublin Descriptors and ESCO-terms. The process is designed to be generalizable into a methodology for others in order to follow guiding steps during conversion of new modules. We will point out lessons learned and pitfalls to avoid along the way, and elaborate on the following steps: 1. Analysing learning outcomes in modules 2. Assigning Dublin Descriptors to learning outcomes 3. Associating standard formulations 4. Creating stackable sub-modules (micro-credentials) 5. Editing the online micro-credentials Micro-credentials Although there is no global consensus about the term MC the indication is always the same: MCs are usually short, flexible, and modular learning programs that can be stacked and completed in much less time than the traditional degree programs. In the last years there was a significant push towards the interest for these small units but as the relevance of MC increased, the lack of definitions and processes towards creating MC has become evident (Brown et al. 2021). 
The European Union started an approach to support lifelong learning and employability through short, flexible, and modular learning programs with their Council Recommendation in 2022 5 . This recommendation aims to establish a common understanding and recognition of MCs across the EU to reach their full potential. According to this resolution, the EU describes a MC as follows: "'Micro-credential' means the record of the learning outcomes that a learner has acquired following a small volume of learning. These learning outcomes will have been assessed against transparent and clearly defined criteria. Learning experiences leading to micro-credentials are designed to provide the learner with specific knowledge, skills and competences that respond to societal, personal, cultural or labour market needs. Micro-credentials are owned by the learner, can be shared and are portable. They may be stand-alone or combined into larger credentials. They are underpinned by quality assurance following agreed standards in the relevant sector or area of activity." (Council of the European Union 2022, p. 5) This definition already gives an idea of the enormous advantages offered by MCs. They are an attractive option for professional development as they allow individuals to acquire new skills quickly and individually. On the one hand, they help individuals to stay competitive in the job market by demonstrating their expertise in a particular area and on the other hand, employers can identify and recruit individuals with specific skills as well as provide a way to train and upskill current employees. So overall, MCs are a valuable tool for both individuals and organizations providing a flexible and accessible way to acquire and validate specific skills or competences (European Commission 2020). According to the definition of the EU, there are still two main issues that need to be solved: First, the "transparent and clearly defined criteria" for the learning outcomes to achieve a well-defined set of knowledge, skills and competences have to be defined. Additionally, there must be a recognized standard that validates this set to ensure the quality of those small units (Council of the European Union 2022). Since MCs are gaining popularity and the use continues to grow, it is important to quickly establish common standards and recognized frameworks that ensure the value and credibility of MCs in the education and employment sectors (UNESCO 2022). Dublin Descriptors The European MOOC Consortium (massive open online courses) collaborates on a Common Microcredential Framework (CMF) which aims to combine the learning outcomes in higher education and professional training. Those programs consist of 4 to 6 ECTS and can be certificated to fit into Europass. To assure the quality of the programs the ENQA Guidelines are used as a reference framework. The CMF uses the qualification levels taken from EFQ to be fully compatible with the qualifications under the Bologna Process (European MOOC Consortium 2019). While the EQF defines skill levels that allow comparison between qualification systems, its definition of skill levels is too abstract to be used to classify learning outcomes (European Commission 2008). This level of detail is possible using Dublin Descriptors, that are compatible with the EQF: "In the QF-EHEA [Dublin Descriptors being adopted in EHEA], learning outcomes are understood as descriptions of what a learner is expected to know, to understand and to do at the end of the respective cycle" (ibid, p. 10). 
That is precisely what is needed in a module description or a definition of the outcomes of an MC. "The Dublin descriptors refer to the following five dimensions: 'knowledge and understanding', 'applying knowledge and understanding', 'making judgements', 'communication' and 'learning skills'. Whereas the first three dimensions are mainly covered by the knowledge and skills dimensions in the EQF, the EQF does not explicitly refer to key competences such as communication, or meta-competences, such as learning to learn" (ibid). Those are the transversal skills also needed in defining the learning outcomes of a module or an MC. The ESCO 6 terms provide a standard vocabulary for these skills. Table 1 provides an overview of the five dimensions of the Dublin Descriptors and associated examples for possible formulations. Of course, Dublin Descriptors are only one way to define skill levels. The well-known Bloom's Taxonomy could also be used. Dublin Descriptors and Bloom's Taxonomy are largely interchangeable and serve the purpose of sorting things nicely. This choice has no bearing on standardizing the formulation through ESCO, which is the more important standard, serving to unify the description on an EU (or international) basis. But since the Dublin Descriptors are accepted by the EU (see above) and provide an easily understandable layering with each level built on top of the next lower one, we use them. Converting the Module Description Step 1: Analysing Learning Outcomes in Modules The first step in the conversion analyzes the curriculum description, especially the learning outcomes, to apply the Dublin Descriptor framework. In our case this includes translating the text to (at least) the English language. The next step is to reduce complex sentences and enumerations into simple, single topic sentences like "Students can apply X", "They can implement Y" or "They can develop Z". Sometimes it was necessary to combine sentences, but only in very few cases. Learning outcomes on a higher abstraction level have to be phrased accordingly, like "Students can … (analyze, choose, argue, reason about, improve, …)" (see Table 1). This process repeats for all skills of the module description and leads to a list of statements sorted by Dublin Descriptor levels. Example Transformation of Learning Outcomes In the following sections we will show only selected examples from a first cycle module in Software Engineering. The complete module description, before and after conversion, can be found on github 7. Let's start with the German original "Die Studierenden kennen die Grundlagen des Softwareerstellungsprozesses". In Step 1 we translate that to English and split it up into simple sentences (not necessary for this example). We get "The students know the basics of the software development process." Step 2: Assigning Dublin Descriptors to Learning Outcomes In step 2 we associate a Dublin Descriptor level: When analyzing the verbs we find "The students know …". This corresponds to level 1. In the next step, we try to find a corresponding ESCO skill or a corresponding ISCED category. In this case we get an appropriate ESCO skill (knowledge): "ICT project management methodologies". The next example is a bit more complex: "Sie können eine vorgegebene Problemstellung analysieren und rechnergestützt Lösungen entwerfen, umsetzen, qualitätssichern und dokumentieren."
Translated to English we get: "They are able to analyze a given problem and can use computer based tools to design, develop, assure quality and document solutions." Splitting this up into simple parts gives (step 1): "They can analyze a given problem. They can use computer based tools for communication and problem solving. They can design and develop a solution for a given problem. They can assure the quality of the solution. They can document solutions." Based on the vocabulary used, we order these learning outcomes by Dublin Descriptor levels (step 2): DD2: "They can design and develop a solution for a given problem.", "They can document solutions.", "They can use computer based tools for communication and problem solving." DD3: "They can analyze a given problem." Step 3: Associating Standard Formulations Now we have to associate standard terms for the skills and competences. We won't do that for all the skills given in the example but only for some edge cases, see Table 2.
Table 2. Corresponding ESCO-Terms for Given Skills and Competences
- "They can analyze a given problem" → S2.7.4 analyze business requirements
- "They can design and develop a solution for a given problem" → S1.11.1 designing ICT systems or applications
- "They can use computer based tools for communication and problem solving" → S5.6 using digital tools for collaboration, content creation and problem solving
But that doesn't always work: "Sie können korrigierende Anpassungen an Lösungsvorschlägen vornehmen". In English: "They can correct design decisions" doesn't exist in ESCO yet. In this case we need to propose a new term S4.9+ "correcting design decisions", as the ESCO criteria are incomplete and don't offer a corresponding term. Or: "Sie können für konkrete Problemstellungen angemessene Methoden auswählen." In English: "The students can choose appropriate methods to solve a given problem." That would correspond to level 2 or 3 in the Dublin Descriptors. In ESCO there are either broader or narrower terms, so we need a new skill "Software development lifecycle models" as a generic term for the subcategories under "ICT project management methodologies". To modify or add ESCO categories one has to differentiate between two types of changes: small and large ones. An amendment to the given text is a small change that can be done quickly. Adding a new skill is a major change that has a larger delay in implementation, as it goes through a central consortium and needs to be translated to several languages.
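Steps 1-3 are mechanical enough to prototype. The following is a minimal sketch, not a tool the paper describes: the verb lists and the outcome-to-ESCO dictionary are illustrative stubs (the ESCO codes are the ones from Table 2); a real implementation would query the ESCO terminology service instead of a hard-coded map.

```python
# Minimal sketch of steps 1-3: verb-based Dublin Descriptor (DD) level
# assignment plus a hard-coded ESCO lookup. Verb lists are illustrative stubs.
DD_VERBS = {
    1: ["know", "understand", "name"],                  # knowledge and understanding
    2: ["apply", "implement", "design", "develop", "use", "document"],
    3: ["analyze", "choose", "judge", "reason about"],  # making judgements
    4: ["present", "communicate", "negotiate"],         # communication
    5: ["learn", "improve"],                            # learning skills
}

ESCO_MAP = {  # outcome fragment -> ESCO term (codes taken from Table 2)
    "analyze a given problem": "S2.7.4 analyse business requirements",
    "design and develop a solution": "S1.11.1 designing ICT systems or applications",
    "use computer based tools": "S5.6 using digital tools for collaboration, "
                                "content creation and problem solving",
}

def dd_level(outcome: str) -> int:
    """Assign a DD level from the verbs found; takes the highest matching level.
    Crude substring matching -- a stub, not a production classifier."""
    text = outcome.lower()
    levels = [lvl for lvl, verbs in DD_VERBS.items()
              if any(verb in text for verb in verbs)]
    return max(levels) if levels else 0  # 0 = needs manual review

for outcome in ["They can analyze a given problem.",
                "They can document solutions."]:
    esco = next((term for frag, term in ESCO_MAP.items() if frag in outcome), None)
    print(dd_level(outcome), esco, "<-", outcome)
```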
Figure 1 shows the mapping of the German free text version to groups of skills structured according to the Dublin Descriptors, but still in the original competency framework. It would be nice to change that to a standard form too, but this should be addressed in the long run, since it requires changing the competence framework used for accreditation and will probably take years to be accepted. Trying to convert the topics covered in a course and defined in the module description is a bit different from converting skills. These topics document the knowledge gained during the course. Classifying knowledge on a high level is easier. On a high level, the topics of the example module (software engineering) are classified by the ISCED-F (International Standard Classification of Education) classification (ISCED 8), but the level of detail given there might not be sufficient. The relevant category for our example is ISCED-F/613 "Software and applications development and analysis" with the following subcategories (ISCED-F 613) 9: Computer programming, Computer science, Computer systems analysis, Computer systems design, Informatics (computer science), Operating systems, Programming (computer), Programming languages development, Software development, Software localisation, Software programming, Software testing. This might not be as fine grained as needed for a module description. So, the broad topic of a course can be defined in a standardized way using ISCED but probably has to be complemented with (non-standard) terms to make clear what the course content really is. Depending on the topic, existing ESCO-terms can be used; in other areas they have to be defined. Whether this is acceptable for the recognition of MCs remains to be seen. The output of this rather mechanical process is given in Figure 1 in the form of our standard module descriptions:
METHODENKOMPETENZ (DD2 + DD3): analyse business requirements; organising, planning and scheduling work and activities; designing ICT systems or applications; correcting design decisions; using digital tools for collaboration, content creation and problem solving.
PERSONALE UND SOZIALE KOMPETENZ (DD3 + DD4): analysing and evaluating ICT systems and solutions; negotiating; presenting information; working with others; building and developing teams.
ÜBERGREIFENDE HANDLUNGSKOMPETENZ (DD5): thinking skills and competences; planning and organising; thinking creatively and innovatively; working efficiently; taking a proactive approach; accepting criticism and guidance; communicating; supporting others; collaborating in teams and networks.
Step 4: Creating Stackable Sub-Modules (Micro-Credentials) As can be seen in Figure 2, modules are often quite large, so it is necessary to break modules up into smaller "micro" units. Figure 2. Example Module Separated into Three Stackable Micro-Credentials Since MCs are defined as small units, a typical size for existing MCs is 1 to 3 ECTS, so about 3 ECTS looks like an acceptable maximum size. In our programs most modules have a size of 5 ECTS. Our example (Software Engineering I) has a size of 9 ECTS, which is too big for an MC. In this case it makes sense to divide the 9 ECTS into 3 parts: design, specification, and implementation. Each of them can stand alone and be taken individually, since each unit has individual skills and competences defined. Of course, these three MCs can be stacked if desired. That makes sense for a first cycle computer science degree (which the module is intended and accredited for), but not necessarily in other cases. For someone working in business or health care, it might make perfect sense to acquire the skills necessary for analyzing and specifying the (business) requirements of a system while the skills to design or implement it are not needed, so they only need the first part. When splitting a module into several parts, a few interesting questions arise, especially with transversal skills. While assigning the skill to one (and only one) of the three parts of our sample module is easy for some skills, typically more technical skills, it might become difficult for others. We must consider three edge cases: 1. skills which are used equally in all parts (example: "using digital tools for collaboration, content creation and problem solving"); 2. skills which are used to some extent in more than one part but with a clear focus in one part (example: "Requirements management", focused in part two);
3. skills which are only acquired in one part (example: "S2.7.4 analyze business requirements" only in part one, or "Code quality, review, testing" only in part three). Case number 3 is obviously the easiest: the skill (knowledge in the example above) can be assigned to one of the parts. Case number 1 is easy from the viewpoint of assigning the skill: it must be assigned to all parts. But that is not without semantic problems: Consider the case of two students, one takes and completes only part one, the other one all three parts. Do both have the same set of skills afterwards? Probably student number two spent more time learning "using digital tools for collaboration, content creation and problem solving". But how do we know that, looking only at the certificate? Especially in the case that she didn't take three MCs but completed the module as originally intended and got only one certificate. Should the skills be weighted with the size of the course? Probably not, since several skills can be acquired in a course, not all equally important or deep. So, we cannot distinguish the two. There is (as far as known to the authors) no standard way of handling this. Maybe we could assign points or badges or some other quantitative attribute to the skill for each MC. But that would be hard to get consistent across platforms/universities. Maybe that only makes sense within one ecosystem, to express things like: "To get the skill 'using digital tools for collaboration, content creation and problem solving' you need to take all three parts together, or maybe take only one of these, but then you need other MCs which give the missing amount of that skill." In the future, student administration systems in universities must be capable of handling more than lecture names, grades and the granting university. A student must have a set of MCs shown, but in addition the skills must be extracted into a skill profile. Out of such a profile, we can then determine whether a student is able to register for a new MC without taking a specific prerequisite. Instead, a prerequisite is expressed by a required skill set (see the sketch at the end of this section). Today, Moodle and most student management systems in universities are not enabled for this requirement of new learning. Step 5: Editing the Online Micro-Credentials The format of MC certificates is defined by the EU 10 as an XML format. There are (web-)tools to create certificates manually, but in the long run, an XML editor should take given module descriptions, highlight problems with the texts, and allow editors to change texts into sentences with autocompletion that matches the current ESCO terms and the ISCED classification of knowledge. In our (DHBW) current project portfolio we have MicroCredX and EU4Dual, which work on such interfaces for MC design; future work will publish how these ideas are implemented and integrated with our student management systems at DHBW. We plan to leverage open-source projects here and cooperate with other universities that have similar requirements. Teachers will have to adapt their way of grading by adopting more detailed skill descriptions and using a more granular grading system. Additionally, transversal skills must be made visible within the grading scheme. The online editor or digital credential issuer uses the MC format to provide an Excel sheet to enter grades for each of the students based on their ID, which consists of the EU-ID or an email address. After uploading the grades, all students receive a notification and can share their credentials publicly.
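The skill-profile idea referenced above can be made concrete. A minimal sketch, not an existing DHBW system: the MC names, ECTS sizes, and the prerequisite set are illustrative; the skills are the ESCO terms used in the example module.

```python
# Minimal sketch: stackable MCs carrying ESCO skills, a student skill profile
# aggregated from completed MCs, and a prerequisite check expressed as a
# required skill set rather than a named course. All data are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class MicroCredential:
    name: str
    ects: int
    skills: frozenset  # ESCO terms granted on completion

SE1_PARTS = [  # the 9-ECTS module split into three stackable 3-ECTS MCs
    MicroCredential("SE1: specification", 3,
                    frozenset({"S2.7.4 analyse business requirements"})),
    MicroCredential("SE1: design", 3,
                    frozenset({"S1.11.1 designing ICT systems or applications"})),
    MicroCredential("SE1: implementation", 3,
                    frozenset({"code quality, review, testing"})),
]

def skill_profile(completed: list) -> set:
    """Aggregate a student's skill profile from completed MCs."""
    return set().union(*(mc.skills for mc in completed)) if completed else set()

def may_register(profile: set, required_skills: set) -> bool:
    """Prerequisite as a required skill set, not a named course."""
    return required_skills <= profile

student = [SE1_PARTS[0]]  # completed only the specification MC
print(may_register(skill_profile(student),
                   {"S2.7.4 analyse business requirements"}))  # True
```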
Conclusion and Further Research The more or less unstructured format of our module descriptions is not adequate to capture the complexity of mapping different competence frameworks (especially in more than one language). While the visual representation we used for our work 11 has proven to be a valuable tool for structuring sample module descriptions during the process, a much more powerful way of representing the content (the learning outcomes, skills and knowledge, achievements and all the other metadata) is needed. Graph databases can be used to represent complex structures and dependencies and have been applied to capture dependencies between modules in university contexts, see for example (Samaranayake 2022). So, it should be evaluated whether a graph database is the right way to solve these problems. MCs require reforms by universities with respect to their basic student management systems. These need to be extended to view students as life-long learners instead of full-time clients for a couple of years. A new student should be able to enter a university with all past certifications immediately accessible to the student management system. Based on this model, which includes a skill profile, specific coursework should automatically be accredited by the current institution, outlining the remaining curricular options that are open to the student given their profile. Additionally, for dual education, skills gained during any practical phase should be taken into account. Finally, a match between employers and employees can then be based on skill-profile matching, revolutionizing the future job market in a world that recruits employees from a worldwide international market.
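How a graph model might capture these structures can be sketched without committing to a particular product. A minimal sketch, assuming a property-graph style data model; plain Python dictionaries stand in for the database, and the node and edge labels are illustrative:

```python
# Minimal sketch of a property-graph model for modules, MCs, and skills.
# Plain dicts stand in for a graph database; labels are illustrative.
nodes = {
    "SE1": {"label": "Module", "ects": 9},
    "SE1:spec": {"label": "MicroCredential", "ects": 3},
    "S2.7.4": {"label": "Skill", "name": "analyse business requirements"},
}
edges = [
    ("SE1:spec", "PART_OF", "SE1"),     # stacking: MC -> parent module
    ("SE1:spec", "GRANTS", "S2.7.4"),   # completion grants an ESCO skill
]

def neighbours(node: str, relation: str) -> list:
    """All nodes reachable from `node` via edges with the given relation."""
    return [dst for src, rel, dst in edges if src == node and rel == relation]

print(neighbours("SE1:spec", "GRANTS"))  # ['S2.7.4']
```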
2023-08-27T15:16:46.116Z
2023-08-25T00:00:00.000
{ "year": 2023, "sha1": "753e64b1a5013564b92206437943fb431f021025", "oa_license": null, "oa_url": "https://doi.org/10.30958/ajte.10-3-2", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "39ccbb0eba900cf9fc18591cbb8107f6bb6949b8", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [] }
91074518
pes2o/s2orc
v3-fos-license
Soil penetration resistance mapping quality: effect of the number of subsamples. There is no consensus in the literature regarding how many subsamples are needed to perform accurate on-farm soil penetration resistance (SPR) mapping. Therefore, the objective of this study was to define the number of subsamples per sampling point needed to quantify the SPR. The experiment was performed in a 4.7 ha area and employed a 50 × 50 m grid system (18 sampling points). The SPR was evaluated using a digital penetrometer in two different years with 1, 2, 3, 4, 5, 6, 9, 12, and 15 subsamples per sampling point. The SPR maps produced with increasing numbers of subsamples were compared to the reference maps (15 subsamples) using the relative deviation coefficient and Pearson's linear correlation. A reduction in the number of subsamples promoted an increase in the variability of the SPR data. Generally, the results from this study suggest the use of at least four subsamples per sampling point to achieve SPR maps with a coefficient of relative deviation less than 10% (30% maximum error per point around the mean) and a significant correlation with the reference maps (15 subsamples). Introduction Soil penetration resistance (SPR) is a measurement utilized to quantify the mechanical impedance of the soil for plant root growth (Bengough, Mckenzie, Hallet, & Valentine, 2011). Therefore, SPR is considered one of the main parameters for the diagnosis of the levels of soil compaction and determination of the most restrictive soil layers for root growth (Girardello et al., 2014). This tool has been widely used by researchers and service providers because it is rapid and easily used in the field compared to other more conventional methods, such as soil bulk density (Molin, Dias, & Carbonera, 2012). The SPR is usually quantified using traditional sampling methods, which include the collection of various subsamples (e.g., 12-15) in a random manner along the field that are considered independent samples (Tavares-Filho & Ribon, 2008; Storck et al., 2016). As a result, a mean SPR value is obtained for and applied to the total sampled area. Since the introduction of precision agriculture in Brazil in the early 2000s, systematic sampling protocols to assess the SPR have been widely applied in commercial fields. This method considers the spatial dependence among the sampling points and consequently the spatial variability of the SPR within the area (Molin et al., 2012). The objective of this methodological change is to identify the spatial variability (horizontal and vertical) of the SPR in the sampled area, which enables the creation of thematic maps to guide site-specific management of compacted subareas and soil layers in the field (Girardello et al., 2014).
Measurements of SPR values are highly influenced by diverse intrinsic (e.g., soil moisture, texture and structure) and extrinsic (e.g., management system) soil factors. As a consequence, high coefficients of variation are usually observed (Beutler et al., 2007; Storck et al., 2016). Therefore, to correctly determine the spatial variability of the SPR, it is crucial to establish an adequate density of sampling points per area and select the number of subsamples that best represents each sampling point. Regarding the sampling density, studies have demonstrated that ideal sampling grids are approximately 50 × 50 m (i.e., four samples per ha) (Cherubin, Santi, Basso, Eitelwein, & Vian, 2011) or 30 × 30 m (i.e., more than 10 samples per ha) (Debiasi, Franchini, Oliveira, & Machado, 2012). However, no study has defined the number of subsamples per sampling point and its possible influence on the mapping of this variable. In the traditional sampling methodology, between 12 and 15 subsamples must be collected to compose a mean SPR value with maximum errors varying between 5 and 15% (Tavares Filho & Ribon, 2008; Molin et al., 2012). Georeferenced sampling normally uses many sampling points; therefore, a larger number of subsamples per sampling point increases the sampling cost and makes this method less attractive to farmers (Molin et al., 2012). There is no consensus in the literature concerning the number of subsamples used to collect the SPR data. Instead, reports vary with the use of one (Souza et al., 2006; Marasca et al., 2011), two (Debiasi et al., 2012), three (Tormena, Barbosa, Costa, & Gonçalves, 2002; Cherubin et al., 2011), five (Silva, Passos, & Beltrão, 2009) and ten subsamples (Secco, Reinert, Reichert, & Silva, 2009; Girardello et al., 2014) per sampling point. The use of an insufficient number of subsamples may result in inaccurate data collection, which generates recommendations for unnecessary interventions. In this study, we tested the hypothesis that an insufficient number of subsamples per sampling point affects the representativeness of the assessment and the accuracy of the generated SPR thematic maps. The aim of this study was to evaluate the impact of the number of subsamples per sampling point on the quality of the SPR mapping and to determine the number of subsamples necessary to generate thematic maps with adequate accuracy for on-farm precision agriculture in crop production systems based on no-till farming. Description of the study area This on-farm study was conducted in an agricultural area near Palmeira das Missões city in southern Brazil (latitude 28°72′62″ S and longitude 69°14′34″ W), with a mean altitude of 600 m. The relief of the area is smoothly undulating, and the soil presents a clay texture (636 g kg-1 of clay, 316 g kg-1 of silt and 48 g kg-1 of sand content) and is classified as Rhodic Acrudox according to Soil Taxonomy (Soil Survey Staff, 2014) and "Latossolo Vermelho distrófico" according to the Brazilian System of Soil Classification (Santos et al., 2013). The area has been cultivated under a no-tillage cropping system without machinery traffic control since 1997 (i.e., for 15 years when the study was performed), including crop succession with wheat in the winter season and soybean or occasionally corn in the summer season.
Determination of the soil penetration resistance The study was performed in 2012 (year I) and reproduced in 2013 (year II). In both years, the data were collected in May after the soybean harvest. The 4.7 ha agricultural area was georeferenced and divided into a regular quadrangular sampling grid of 50 × 50 m to yield 18 sampling points (Figure 1). The SPR was determined down to a 0.30 m depth using a portable digital penetrometer (PenetroLOG® model PLG 1020, Falker Automação, Porto Alegre, Rio Grande do Sul State, Brazil) with a cone diameter of 12.83 mm. The rod was inserted into the soil at a constant speed close to 20 mm s-1. When the insertion speed surpassed 30 mm s-1, the equipment registered an error and the measurement was remade. Fifteen subsamples were collected from each sampling point following the inter-row position of the previous crop within a radius of 3 m around the georeferenced point. The SPR evaluations were performed two days after a heavy rain. The whole procedure took only one working day in both years. The water content of the soil at the moment of the SPR evaluation was determined using the gravimetric method (Embrapa, 1997) with disturbed soil samples collected from the 0.00-0.15 m layer at three points (points 2, 9 and 16) (Figure 1). The average soil moisture was 310 and 330 g kg-1 for years I and II, respectively. Mathematical and statistical analyses The datasets from each sampling point (18 points) and soil layer (i.e., 0.00-0.05, 0.05-0.10, 0.10-0.15, 0.15-0.20, 0.20-0.25 and 0.25-0.30 m), each composed of 15 subsamples, were organized into a spreadsheet and subjected to outlier analysis. Any values that fell outside of the range of two standard deviations from the mean were considered outliers. Subsequently, the determination of the optimum number of subsamples was performed based on Equation 1 as proposed by Petersen and Calvin (1965): n = (t² S²)/D² (1), where n is the number of subsamples, t is the value from the distribution table as a function of the level of significance (α) and the degrees of freedom used to estimate the sample variance, S is the sample standard deviation of the mean (15 subsamples per sampling point) and D is the result of the SPR mean at each sampling point divided by the percentage variation allowed around the mean. The level of significance used was 0.05, and the optimum number of subsamples for each sampling point was determined considering maximum errors of 10, 20 and 30% around the mean. The SPR data from each soil layer considering the different numbers of subsamples were subjected to a descriptive statistical analysis to obtain the positional means (minimum, mean and maximum) and dispersion (coefficient of variation, CV, %). The CV values were used to classify the variability of the data into low (CV < 12%), medium (CV = 12 to 62%) and high (CV > 62%), as proposed by Warrick and Nielsen (1980). The normality hypothesis was tested and confirmed using the W test (p ≥ 0.05) (Shapiro & Wilk, 1965), and thus no data transformation was necessary. The data analysis was completed using the statistical package SAS (SAS, 2010).
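Equation 1 is easy to apply in code. A minimal sketch, assuming the reconstruction n = t²S²/D² given above, with D taken as the allowed deviation around the point mean; the numbers are illustrative, not values from the paper's dataset, and scipy's t-distribution supplies the critical value:

```python
# Minimal sketch of the Petersen & Calvin (1965) subsample-number formula,
# n = t^2 * S^2 / D^2, with D the allowed absolute deviation around the mean.
# Illustrative numbers only; the paper's per-point means and SDs live in
# its figures, which are not reproduced here.
from scipy import stats

def optimum_n(mean_spr: float, sd: float, max_error: float,
              n_sub: int = 15, alpha: float = 0.05) -> float:
    """Subsamples needed so the mean deviates at most `max_error` (fraction)."""
    t = stats.t.ppf(1.0 - alpha / 2.0, df=n_sub - 1)  # two-sided critical value
    D = max_error * mean_spr                          # allowed absolute deviation
    return (t * sd / D) ** 2

# Hypothetical sampling point: mean SPR 2.0 MPa, SD 0.6 MPa, 15 subsamples.
for err in (0.10, 0.20, 0.30):
    print(f"max error {err:.0%}: n = {optimum_n(2.0, 0.6, err):.1f}")
```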
Analysis of the soil penetration resistance mapping quality

Two parameters were used to evaluate the effect of the number of subsamples on the accuracy of the thematic maps: Pearson's linear correlation coefficient (p ≤ 0.05) and the relative deviation coefficient (RDC, %) (Coelho et al., 2009). The mean SPR values from 15 subsamples per sampling point were considered as a reference (standard) for comparison with the other maps produced using different numbers of subsamples (i.e., 1, 2, 3, 4, 5, 6, 9, and 12).

The RDC, which is expressed as an absolute value, shows the dissimilarity between two maps as demonstrated by the differences between the interpolated points of each map. The RDC was determined using Equation 2, which was adapted from the equations applied by Coelho et al. (2009) and Cherubin et al. (2015):

RDC = (100/n) Σ |SPRj,i − SPRref,i| / SPRref,i    (Equation 2, summed over the n points)

where n is the number of sampling points (18), SPRref,i is the soil penetration resistance value at point i (reference value obtained using 15 subsamples per sampling point), and SPRj,i is the soil penetration resistance value at point i determined using different numbers of subsamples (i.e., 1, 2, 3, 4, 5, 6, 9, and 12).

Number of subsamples per sampling point

After the preliminary analysis to detect outliers, 2.7% of the raw data was removed. This procedure is fundamental when the SPR is measured using portable penetrometers with manual operation because the roughness of the soil surface (Catania et al., 2013) and the variation in the speed of the rod going into the soil profile can influence the results (Valadão Jr, Biachini, Valadão, & Rosa, 2014).

The optimum number of subsamples per sampling point was similar in both years studied, indicating good consistency of the results (Figure 2). The surface layer (0.00-0.05 m) required a larger number of subsamples to adequately represent the SPR of the sampling point. Thus, 30 and 35 subsamples were required from each sampling point in years I and II, respectively, to maintain 75% of the sampling points with a maximum error of 10%. If all of the sampling points (18) were to achieve this accuracy (i.e., a maximum error of 10%), 44 subsamples would need to be collected. The higher microvariability observed in the surface soil layer (radius 3 m) is due to various factors, including the soil and crop management practices, effects of plant roots, wet-dry cycles, and the potential surface sealing that is commonly observed in no-tillage systems, especially when little straw is present (Cherubin et al., 2011; Silva, Bianchi & Cunha, 2016). High variation in the microvariability of the SPR data was observed between sampling points (Figure 2). Thus, for some points in the surface layer, fewer than 10 subsamples were sufficient to obtain values with a maximum deviation of less than 10%. For the surface soil layer in both years, 11 and 5 subsamples per sampling point were sufficient to obtain SPR values with maximum errors of 20 and 30% around the mean, respectively. However, the surface layer of the soil under a no-tillage system is periodically disturbed during the opening of the sowing row, thereby minimizing possible physical restrictions (i.e., high SPR values) for plant root growth (Moreira et al., 2016). Thus, when the SPR is determined, the major interest of the technician is an evaluation of the compaction state of the subsurface layers of the soil (below 0.05 m), where higher SPR levels restrict the growth of roots at depth and may limit crop development, mainly due to water stress (Tormena et al., 2002; Cardoso et al., 2006). In a study of the effect of high SPR values on soybeans, Cardoso et al. (2006) observed that impediments to root growth in the subsurface caused the root system to concentrate in the soil surface layer (0.00-0.05 m), which was the zone that retained the lowest water content, thereby negatively influencing nutrient absorption.
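Returning to Equation 2 above, the following is a minimal Python sketch of the RDC computation; the array names and example values are illustrative, not measurements from this study.

```python
import numpy as np

def rdc(spr_ref, spr_j):
    """Relative deviation coefficient (RDC, %) between two SPR maps.

    spr_ref: SPR at each sampling point from the reference map (15 subsamples).
    spr_j:   SPR at the same points from a map built with fewer subsamples.
    Computes RDC = (100/n) * sum(|SPRj_i - SPRref_i| / SPRref_i).
    """
    ref = np.asarray(spr_ref, dtype=float)
    alt = np.asarray(spr_j, dtype=float)
    return 100.0 / ref.size * np.sum(np.abs(alt - ref) / ref)

# Illustrative values only (MPa) for a handful of the 18 points.
print(round(rdc([2.9, 3.1, 2.7, 3.3], [3.2, 2.8, 2.9, 3.6]), 1))
```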
For the deeper soil layers (0.05-0.30 m), the optimum numbers of subsamples per sampling point were similar. At these depths, the collection of 15 and 18 subsamples was necessary to obtain a maximum error of 10% around the mean for 75% of the sampling points for years I and II, respectively. However, 35 subsamples needed to be collected for all sampling points to attain this accuracy level, demonstrating that the high SPR microvariability among sampling points observed in the surface soil layer persisted in the deeper soil layers. Only 8 and 4 subsamples would be required if we allowed maximum variations of 20 and 30% around the mean for the layers between 0.05 and 0.30 m in all sampling points, respectively. However, for 75% of the sampling points, only 5 (20% error) and 2 (30% error) subsamples were sufficient.

The use of reduced numbers of subsamples decreases the operational cost of field SPR sampling (less time-consuming). However, the low accuracy of the measurement (i.e., a higher level of error) can make the results less reliable and even technically unviable depending on the goal of the assessment (Tavares Filho & Ribon, 2008). For example, suppose that the SPR in a soil layer at one specific point is 3.5 MPa and a 30% maximum error is allowed; in this scenario, the result of the evaluation could be between 2.45 and 4.55 MPa. This range encompasses SPR values that are considered adequate for root growth as well as values that are highly restrictive. Allowing a 20% maximum error results in values that vary from 2.8 to 4.2 MPa, and allowing a maximum error of 10% results in a range of 3.15 to 3.85 MPa, which does not generate large differences in terms of soil management decisions. Thus, an increase in the robustness of sampling will increase the accuracy of the information and in turn support management decisions that prevent unnecessary soil disturbances to alleviate compaction and its deleterious effects on soil ecosystem services. Nevertheless, the increased operational costs involved in more intensive soil sampling should always be considered to ensure that the evaluation is financially feasible for the farmer. Based on these factors, the decision concerning the number of subsamples that should be taken in an SPR evaluation depends on the accuracy required by the farmer/consultant (goals of the assessment) and the capacity for investments.

Descriptive statistics of the SPR sampling point data set

The highest mean SPR values (close to 3 MPa) were obtained from the soil layers below 0.15 m in depth in year I (Table 1) and the 0.10-0.15 and 0.15-0.20 m layers in year II (Table 2). Soil compaction in layers below the action zone of seeder disks and shank openers has been frequently detected in soils under the no-tillage system (Cardoso et al., 2006; Debiasi, Levien, Trein, Conte, & Kamimura, 2010; Cherubin et al., 2011; Moreira et al., 2016). The absence of soil disturbance, the low diversified cropping system and especially the systematic traffic of heavy machinery under soil moisture conditions favorable for compaction have been proposed as the main causes associated with soil compaction in no-tillage areas (Tormena et al., 2002; Debiasi et al., 2010).
The analyses of the mean SPRs from the whole area (18 sampling points), from the different soil layers and over the two-year period showed that the number of subsamples did not largely influence the results. Therefore, if the objective of the SPR evaluation is to obtain a general diagnosis of the area (traditional sampling), the number of subsamples does not influence the results. This finding corroborates the results obtained by Tavares Filho and Ribon (2008) and Molin et al. (2012), which indicate that 12-15 subsamples are sufficient to obtain satisfactory results using conventional sampling.

However, the number of subsamples had a marked influence on the amplitude of the SPR values (i.e., the range between the minimum and maximum values): the amplitude of the data decreased as the number of subsamples increased. A lower amplitude of the SPR values between the sampling points indicates that the value obtained for each sampling point when a higher number of subsamples is used more accurately represents the SPR mean, since the microvariability of the area is better accounted for. Moreover, the errors resulting from possible sampling faults are diluted (Molin et al., 2012). For example, the maximum SPR value observed in year I using 15 subsamples was 3.27 MPa (0.10-0.15 m layer), whereas when using only one subsample the maximum at this location was 4.19 MPa. This evidence is important and emphasizes the need to correctly choose the number of subsamples when localized intervention in the field is guided by the spatial variability in the SPR as proposed by Girardello et al. (2014). The utilization of an insufficient number of subsamples can incorrectly indicate a need to conduct interventions in areas, which makes this type of management technically and economically inefficient (Tavares Filho & Ribon, 2008; Molin et al., 2012).

Generally, the data show CV values classified as low or medium. The coefficient of variation values were classified with low variation (< 12%) when more than six subsamples were used (Warrick & Nielsen, 1980). The exception was the samples collected from the 0.00-0.05 m soil layer, which were classified as medium (12 < CV < 62%); with only one subsample, the CV in years I and II reached values of 29 and 23%, respectively. Independent of the year of the study, a reduction in the number of subsamples resulted in an increase in the CV values (Tables 1 and 2). The observation of higher CV values is an indication of the existence of higher spatial variability of the attribute in that area (Oliveira et al., 2015), which requires the utilization of sampling plans that use a larger number of samples to faithfully reproduce the spatial variability at that location (Siqueira et al., 2014). Although we did not investigate different sampling grid sizes in this study, the results obtained indicate that using a higher number of subsamples per sampling point is an SPR mapping strategy that allows the use of less dense sampling grids. This finding needs to be confirmed in future studies.
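As a minimal illustration of how the CV values discussed above are computed and classified under the Warrick and Nielsen (1980) thresholds (the function name and the readings are illustrative, not data from this study):

```python
import numpy as np

def cv_class(values):
    """Coefficient of variation (%) and its Warrick & Nielsen (1980) class."""
    v = np.asarray(values, dtype=float)
    cv = 100.0 * v.std(ddof=1) / v.mean()
    label = "low" if cv < 12 else ("medium" if cv <= 62 else "high")
    return cv, label

print(cv_class([2.9, 3.1, 3.4, 2.8, 3.0]))  # illustrative SPR readings (MPa)
```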
Quality of the thematic SPR maps as a function of the number of subsamples

In the surface layer of the soil (0.00-0.05 m), we found a significant correlation with the reference maps (15 subsamples) when at least six and four subsamples were used in years I and II, respectively (Table 3). For the other soil layers, three (year I) and four (year II) subsamples were sufficient to obtain a significant correlation with the reference maps. For all of the layers measured, a reduction in the correlation coefficient was observed with a reduction in the number of subsamples collected. These results indicate that the maps obtained using lower numbers of subsamples per sampling point presented higher deviations in their estimates, thereby reducing the reliability of the information (Cherubin et al., 2015).

In both years, the SPR maps for the surface soil layer presented the lowest correlations with the reference maps (15 subsamples). This result is due to the higher variation in the SPR values in this soil layer, as previously discussed in the section on the number of subsamples per sampling point. Tavares Filho and Ribon (2008) compared no-tillage and conventional tillage systems, and Storck et al. (2016) studied an integrated crop-livestock system; both studies also found that the 0.00-0.10 m layer presented the highest variations and consequently needed a larger number of subsamples than any other soil layer to achieve a good level of reliability. However, considering that the surface layer of the soil under a no-tillage system is periodically disturbed during crop sowing, which minimizes possible physical restrictions to plant root growth, the decision on the number of subsamples per sampling point for penetration resistance measurements should be based on the microvariability of this parameter in the deeper soil layers (Moreira et al., 2016).

The RDC results (Figure 3) were similar to the results obtained for the correlation analysis for the two study years and all of the soil layers measured, with a correlation of -0.88 between the RDC and Pearson's correlation. A high correlation (r = 0.96) between the RDC and the Kappa index, which is another procedure used to evaluate the similarity between thematic maps, has been shown in the literature (Bazzi, Souza, Uribe Opazo, Nóbrega, & Neto, 2008). Independent of the soil layer measured, there was an increase in the deviation (sampling errors) in the maps with a reduction in the number of subsamples. The highest RDC values were found in the 0.00-0.05 m layer in year I, with a deviation of 21%, and in the 0.00-0.05 and 0.05-0.10 m layers in year II, with a deviation of 19%. The use of RDC analysis to compare SPR maps has not been documented in the literature, which characterizes this study as pioneering in this area. However, this coefficient has been used with success for other variables, such as grain yield (Bazzi et al., 2008; Coelho et al., 2009) and soil chemical attributes (Cherubin et al., 2015).
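The map-to-map correlation test used above can be sketched in a few lines; this is a minimal illustration assuming scipy's pearsonr, and the point values are synthetic, not data from this study.

```python
import numpy as np
from scipy import stats

# Illustrative SPR values (MPa) at the 18 sampling points for two maps.
ref = np.array([2.9, 3.1, 2.7, 3.3, 3.0, 2.8, 3.2, 3.1, 2.9,
                3.4, 3.0, 2.6, 3.2, 2.9, 3.1, 3.3, 2.8, 3.0])
# A map built from fewer subsamples behaves like the reference plus noise.
few_sub = ref + np.random.default_rng(1).normal(0.0, 0.3, ref.size)

r, p = stats.pearsonr(ref, few_sub)  # correlation with the reference map
print(f"r = {r:.2f}, significant at 5%: {p <= 0.05}")
```

Raising the noise level (i.e., using fewer subsamples) lowers r and the chance of significance, which mirrors the pattern reported in Table 3.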
To obtain RDC values less than 10% in the surface soil layer (0.00-0.05 m), 9 and 6 subsamples were necessary for years I and II, respectively. This number decreased to 4 subsamples for the deeper soil layers in both years. Because the RDC is calculated from the mean difference in the modulus of the interpolated values in relation to the reference map (Coelho et al., 2009), no RDC value is considered optimum, and the choice of the acceptable deviation coefficient depends on the degree of reliability desired by the researcher. In this study, an RDC of 10% was considered a suitable value that could guide the interpretation of the results, as suggested by Bazzi et al. (2008).

Table 3. Correlation between the soil penetration resistance (SPR) maps obtained with different numbers of subsamples (1-12) per sampling point and the reference maps obtained with 15 subsamples in two years in Palmeira das Missões (RS), southern Brazil.

In the literature, divergent opinions exist regarding the SPR value that should be considered the critical limit for plant root growth. These values vary according to the characteristics of the soil, management practices and crops. Traditionally, SPR values between 2.0 and 2.5 MPa (Taylor, Robertson & Parker, 1966) are considered the critical limits for root growth. However, various studies have shown that plants tolerate higher SPR values (up to 3 MPa) in areas with no-tillage systems (Secco et al., 2009; Girardello et al., 2014; Moraes, Debiasi, Carlesso, Franchini, & Silva, 2014), probably due to the better soil structure and the greater presence of continuous biopores (Moraes et al., 2014). Independent of the critical limit considered, subareas with SPR values considered restrictive for root growth were detected using the mapping strategy. In this sense, this study could help farmers, consultants and researchers with decision-making regarding the sampling procedure that should be used for SPR evaluations in agricultural soils.

Conclusion

The number of subsamples used to obtain a soil penetration resistance value that properly represents a sampling point depends on the level of error tolerated in the mapping, with a higher number of subsamples resulting in more accurate maps.

A reduction in the number of subsamples promotes an increase in the variability of soil penetration resistance data. Generally, this study suggests that at least four subsamples per sampling point achieve soil penetration resistance maps with a coefficient of relative deviation less than 10% (30% maximum error per point around the mean) and a significant correlation with the reference maps (15 subsamples).

Figure 1. Schematic representation of the sampling grid (50 × 50 m) used in this study area highlighting the distribution of the 15 subsamples of soil penetration resistance at each sampling point in Palmeira das Missões (RS), southern Brazil.

Figure 2. Number of subsamples per sampling point required to obtain soil penetration resistance (SPR) values with maximum errors of 10, 20 and 30% around the mean, for different soil layers in two years in Palmeira das Missões (RS), southern Brazil.

Figure 3. Relative deviation coefficient (RDC, %) between soil penetration resistance maps (SPR, MPa) obtained with different numbers of subsamples (1-12) per sampling point and the reference maps (15 subsamples) for different soil layers in years I (a) and II (b) in Palmeira das Missões (RS), southern Brazil.
Figure 4. Thematic maps of soil penetration resistance (SPR) obtained by considering different numbers of subsamples (1-12) per sampling point and the reference maps (15 subsamples) for the different soil layers in two years in Palmeira das Missões (RS), southern Brazil.

Table 1. Descriptive statistics of soil penetration resistance (SPR, MPa) in the soil profiles obtained with different numbers of subsamples (1-15) per sampling point in year I in Palmeira das Missões (RS), southern Brazil.

Table 2. Descriptive statistics of soil penetration resistance (SPR, MPa) in the soil profiles obtained with different numbers of subsamples (1-15) per sampling point in year II in Palmeira das Missões (RS), southern Brazil.
2019-04-02T13:12:04.674Z
2018-02-09T00:00:00.000
{ "year": 2018, "sha1": "6cdf19150efc46121d9add1247b1d6e3784cb354", "oa_license": "CCBY", "oa_url": "http://periodicos.uem.br/ojs/index.php/ActaSciAgron/article/download/34989/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "4fdfd47808694965b6a0566e3b6398bb534c198c", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Mathematics" ] }
239061680
pes2o/s2orc
v3-fos-license
Illusions and Realism in the History of Human Rights

Review article of Frederick Schauer, The Force of Law, Cambridge, Mass., Harvard UP 2015; Allen Buchanan, The Heart of Human Rights, New York, Oxford UP 2013; Samuel Moyn, Human Rights and the Uses of History, New York, Verso 2014; Brian Tierney, Liberty and Law. The Idea of Permissive Natural Law, 1100-1800, Washington, CUA Press 2014.

There has been much talk about revisionism in the history of human rights recently. Most often this term has been associated with Samuel Moyn and his book The Last Utopia. Apart from that, the label "new histories of human rights" has also appeared in connection with two conferences taking place in the USA in 2015.1 What is new about these new histories? In his influential book Moyn rejects the triumphalism of old-style narratives about human rights and argues that real human rights did not appear until the 1970s. Before that, the noble idea had only been abused to protect national states and their sinister interests. To be honest, I do not see much difference between Moyn's alleged revisionism and the enthusiastic language of older human rights histories. The developments in "new democracies" in Eastern Europe after 1989 and in the Middle East after 2010 should have warned us that enthusiasm is a very misleading guide, and imitating the American rights-talk does not solve any of the difficult issues of a just political order in a real world. If the idea of human rights is to be of any help in maintaining democracies, it would be more appropriate to require more realism and less enthusiasm in histories of human rights. For I believe that if historiography is to make a meaningful contribution to the study of the phenomenon we call human rights, it has to turn away from illusionary approaches to realism.

Three Illusions

An investigation of human rights may be said to be based on illusions any time the researcher falls prey to one of the pitfalls of language.
An "arm" as a human limb is not the same as an "arm" as a weapon, even though both words sound and look the same. In a similar manner, historians writing about human rights may often find themselves writing about something completely different than they intended. In the case of an "arm", we may clear up our misunderstanding by looking at the object we are speaking of; in the case of immaterial abstract expressions, such as "rights" and "law", sensory perception will not help us. Due to language, we may easily go astray. The necessity to grasp the immaterial object forces us, as historians, to use metaphorical language; the necessity to tell a story forces us to approach the subject as if we were recounting a biography or describing a fight. However, when we approach the subject with the illusion that human rights are an edifice having foundations, a plant having roots, a person having a life story, or the result of a long struggle, we may easily find ourselves describing completely irrelevant things and misguiding our readers.

One may perhaps think that human rights are like a building which must have a general philosophy of human rights as its foundations.2 Someone else might believe that the building would be more stable if it were based both on morality and law, because two columns are better than one.3 However, legal theory has already taught us that law and morality are two different normative systems and mixing them up would actually result in utter injustice. Someone else may think that it is the task of historians to find the moment that human rights were born, and believe they have somehow been 'there' as if they were living.4 However, human rights are not human beings or animal species; they are an instrument of law. Take for example the freedom of the press, which has become a traditional part of civil rights catalogues in democratic constitutions. It also has a moment of birth. In England censorship ended in 1695, in France freedom of the press was declared in 1789, but further development in both countries shows that these birth-moments did not bring freedom of the press to life. In both countries the freedom of the press was restricted by various stamp and tax duties and other legal limitations. It was not until 1855 in England and 1881 in France that these limitations were cancelled.5 Lastly, most of the works which pretend to deal with human rights actually focus on the fight for them.6

2 Examples are given below in the discussion of Allen Buchanan's book.
3 This seems to be the case with Catholic critics who regret that modern human rights have lost their metaphysical foundation and seek to blend law and morality.

These are also works telling stories of injustices that
The only way to approach the subject of human rights in a realistic manner is, in my opinion, to approach them as a part of law . By this I do not mean that research should be restricted solely to positively enacted rights, but that rights should be considered as part of the social phenomenon we call law . This social institution is also immaterial but its boundaries and properties have already been sufficiently elucidated by centuries of theoretical discussions among legal theorists . We are certainly on much firmer ground when we ask what law is than when we discuss the nature of human rights, which are still taken as being above the law or as standing between law and morality . Sometimes human rights are treated as a kind of modern religion, which makes the whole concept open to similar abuses as other religions before them . Paradoxically, the military illusion may seem to attribute a great significance to law but it takes law only as a weapon against human rights violators . Law is, however, much more than threats and punishments . But which of the new histories of human rights are on the way towards realism? Which of them follow only the path of illusions? In what follows I should like to consider four new Anglophone books which made international impact . I will divide our investigation into four parts . In the first one I will introduce two new books that may be useful for the theoretical assumptions, then I will proceed to two recent works on the history of human rights . Part three discusses the question of (dis)continuity on the example of Brian the idea of new international human rights . Finally I shall discuss the benefits of a new potential cooperation between historiography and legal theory . Law and Men: Bridging the Gap The Force of Law, a new book by the American legal theorist Frederick Schauer, 8 is a good starting point for our discussion as it sums up in a neat fashion the current state of opinions on the question of what is law and what are the limits of its power over individuals . More importantly, it shows how difficult it is to bridge the gap between the orders of law and real human behaviour . Why is it important for human rights? If human rights are a part of law then even their power over individuals is limited . The actual aim of the book is to revisit the question of the role of coercion in law . This is a fundamental issue of legal theory which is closely linked to the question of the nature of law . The very first natural law theorists who sought to define the main features which distinguish law and morality as two different normative systems highlighted the entitlement to coerce as the essential feature of law . We already find these convictions in the works of Samuel Pufendorf 9 and Christian Thomasius . 10 In the 18 th century the force of coercion was called into doubt by Cesare Beccaria and some French legal reformers who veered to the opposite extreme and began to dream of a society without law . 11 This trend of thought was continued in the 19 th century by utopian socialists who supposed that proper moral education would make law obsolete and future societies would maintain cooperation even without law . 12 Other reformers inspired by Beccaria sought to reverse the logic of law as a normative system which punishes bad behaviour and began to ask whether it would be more effective to reward good behaviour . 
In the 19th century law as a system of orders backed by threats of punishment was reasserted by John Austin in England and Rudolf Jhering in Germany. Unlike their English colleagues,14 German legal theorists were already rejecting this notion around 1870, when Ernst Rudolf Bierling, Georg Jellinek and after them Hans Kelsen replaced this simplistic notion with more nuanced conceptions of law.15 What may come as a sort of surprise is that this campaign against a coercion-based concept of law began with the curious Imperativtheorie of Karl Binding and August Thon.16 Schauer's narrative, however, starts only with the moment when Herbert L. Hart rejected this simplistic notion in England. It was in 1961 with the appearance of Hart's epoch-making book The Concept of Law that Austin's command theory of law was criticized because it failed to see that law is a broader phenomenon than orders, prohibitions and punishments.17 The concept of law had to be stretched to include power-conferring rules. Consequently Hart proved that the narrow concept of law as the commands of the superior is not consistent with the current usage of real law, and introduced a broader concept which implied that the essential aspect of law is not the threat of sanctions but recognition by members of society. Hart's work inaugurated the era of the dominance of the "soft" understanding of law, which implied that law remains law even when it renounces sanctions.18 Over time some theorists began to feel that this broad concept of law failed to explain the specificity of law and its difference from other normative systems. Schauer himself presents his book as a work which only accomplishes this revisionist development. He does not make the claim of being the pioneer reversing the tide but only a successor to a recent trend, the beginnings of which he dates to the mid-1980s.

13 This was the case with Giacinto Dragonetti, Delle virtu e delle premi (On Virtues and Rewards), Naples 1766. After him the subject was continued by Joseph Sonnenfels in his conception of Polizey-Wissenschaft.

It is, however, not Schauer's intention to rehabilitate the surpassed conception of law as orders backed by sanctions. He approaches the problem with the knowledge of all the new conceptual and empirical findings about the effects of law on society. Especially the empirical experiments concerning the impact of law on human behaviour make his investigations interesting, for they show that the ability of law to compel compliance is far from granted. In other words, it cannot be taken for sure that the effect of new laws on human behaviour is guaranteed merely by issuing new legislation. Much more is needed to bridge the gap between the letter of law and real human behaviour. In dealing with the question of why people obey laws, Schauer examines culture-specific conditions of various nations, sociological surveys, psychological experiments and media. He draws on Tom Tyler's surveys, in which it was proved that the fear of punishment plays only a marginal role in explaining compliance with law. Most people simply cannot be taken for the imaginary "bad man" who must be coerced to obedience by the threat of sanctions. Other studies concerning compliance with tax laws, or daily experience with the behaviour of drivers in traffic, show, however, that in both cases people do not comply with the rules voluntarily. The fear of punishment does play a certain role in their decisions.
Experiments conducted by Stanley Milgram between 1963 and 1974 prove, for a change, that people obey law because they tend to obey authorities.19 Only rarely does law alone produce change in people's moral and policy views.20 Most often changes in habitual attitudes are produced by media, movies or education. At the conceptual level, Schauer concludes that it is necessary to have a narrower concept of law because otherwise the question of why people obey the law would not make sense. The effects of law would be blurred by the effects of other normative systems and cultural conditions. For this reason, it is advisable to return to the view that coercion is perhaps not essential but is still a central property of law. It is the main feature which distinguishes it from other normative systems. The main function of law is, as it seems, not to punish crimes but to coordinate behaviour in complex human societies.

In this regard, Schauer's views on anthropology are fascinating. In recurrent reflections on the nature of the man for whom laws are written, Schauer concludes that people do not need to be forced to good behaviour by law. It is wrong to suppose that humans are motivated either by self-interest or by fear of punishment. However, Schauer does not subscribe to a utopian belief that people are good by nature and only society spoils them. He does believe that people often act on good intentions. The aim of law is then not so much to prevent bad people from doing bad things as to prevent good people from doing what they (wrongly) think is good.21 Even the social order does not need to be safeguarded by the threat of legal punishment, for people may voluntarily develop cooperative behaviour. Schauer draws here on game theory, especially Robert Axelrod's experiments, which demonstrated that even selfish individuals may develop cooperative behaviour. This "cooperative agreement"22 seems to be at the root of people's obedience to law.

21 Ibidem, p. 166.

On balance, Schauer's very intelligent and highly informative book demonstrates that law's power over people is very limited and that it is advisable to trust their own judgment. He also demonstrates that law may produce an effect on people even without a utopian transformation of human nature. This is in stark contrast to the exaggerated expectations about the power of laws enforcing human rights which we hear today from politicians and activists.

Another unrealistic assumption of human rights historiography is the belief in the existence of a contemporary human rights theory. It happens quite often that historians follow the story of a set of particular rights because they believe that their personal choice corresponds with an uncertain contemporary human rights theory, or a generally shared understanding of human rights. The same implicit belief is often responsible for the exclusion of certain thinkers from a history of human rights. For example, German theorists of natural law are often not included in the history of human rights on the assumption that the human rights they put forward were not real or that they were even faked. Historians of norms - whether of law or ethics - simply select only those thinkers or works which fit into their subjective idea of what is good. This methodological problem has already been discussed in German legal history, when Franz Wieacker sought to cope with the challenge of Hans Georg Gadamer's hermeneutics.23
Hermeneutics makes us see that legal history may be based on a "hermeneutical circle" in which the legal historian investigates history merely to confirm his preconceived selection of "good ideas". Nowadays historians might benefit from the insightful work of the American philosopher Allen Buchanan, The Heart of Human Rights, because he makes it his goal to deny belief in the existence of such a generally accepted or "folk theory" of human rights today.24

In Buchanan's view philosophers writing about human rights tend to overestimate the significance of their discipline. A range of American philosophers, including himself (as he says with a great deal of self-criticism), have miscomprehended the relationship between philosophy and law, because they approached the question as if the most important task were to create a philosophy of human rights while the creation of particular legal instruments for their protection was taken as a mere technical procedure in which philosophers were not interested. Buchanan believes that their conceptions were based on what he calls the Mirroring View.25 In the last twenty years there have indeed been several general philosophies of human rights produced by American philosophers such as Alan Gewirth, James Nickel, James Griffin and Carl Wellmann which may have been based on this error.26 The "mirroring view" is the belief that for every legal right there must be an antecedent corresponding moral right. According to this misconception, the task of a philosopher is to define the moral right, while jurists derive a corresponding legal instrument from this starting point. I am afraid this is true even of the work of the historian Johannes Morsink, who wrote extensively on the philosophy of the UDHR, and of many other historians who sought to extract some sort of a philosophy from human rights bills.27

Conversely, Buchanan argues that the heart of human rights is not these philosophical concepts but the international law of human rights. In other words, the relationship is reversed. The most important thing about human rights is international legal instruments, because they provide universal standards for regulating the behaviour of states toward the citizens under their jurisdiction. For this reason Buchanan claims that the present book will be based on an analysis of legal documents protecting human rights and the practices related to them. Unfortunately, this promise is not fulfilled. There are just two instances in the whole book when Buchanan quotes legal cases, as most of the book is conceived as a dialogue with other American philosophers and their concepts.28 One could find much more useful information about the working of UN human rights treaty bodies and their cases in the manual of the Jewish-Canadian activist Anne F. Bayefsky than in this academic work.29 It is a great pity that Buchanan does not work with the data provided by the index of cases from these UN bodies. The data are available in the Universal Human Rights Index, and the working and weaknesses of this system would certainly be better revealed by an analysis of these data than by playing with scholarly definitions.30 Buchanan apparently subscribes to the idea that the main goal of human rights is to protect individuals against their own states and that therefore the main efforts must be oriented at restraining the states.
Like many supporters of the "international solution", Buchanan does not even consider the question of who would guard the guardians.31 This is, however, one of the elementary problems of any legal theory since the 19th century. If the only problem is a proper moral supervision of the performance of the states, then NGOs are seen as the bodies conducting this supervision. Buchanan does not ask who would control the NGOs. Transparency is discussed here as the main condition for a just state, but Buchanan does not ask whether NGOs themselves are transparent in their policies.32 Let us remark that - while there are agencies checking the finances of NGOs or checking whether they devote their efforts to their declared goals - there is so far only one institution also supervising the political performance of international NGOs. This is the NGO Monitor at Bar Ilan University, which had been boycotted because of its cooperation with Ariel.33 Since the EU prohibits all cooperation with any Israeli institution in the "occupied territories", the NGO Monitor is actually severely restricted in its efforts, and the idea of any supervision of international NGOs is taboo.

The anti-state preconception underlying this work may also be responsible for the fact that Buchanan fails to see the limits of this system. The UN human rights treaty system makes it possible for an individual or an organization to complain about a state, but not for a state to complain about an individual or a non-state entity.34 If we are to take the rulings of these bodies as a moral compass, then we have to consider that they clearly show the state as a bad institution, because they are designed solely for procedures against states. This is clearly a problem in present-day conflicts between a state and a non-state entity, such as the Gaza Strip, the Russian separatist republic in Ukraine, the Islamic State, or territories governed by terrorists in Mali and Libya.35 Buchanan seems to be quite optimistic about this burning issue of international law, because he deals with the "international legal system" as if it already supervised the performance of these non-state entities as well.36 He never addresses the problems with the accountability of these non-state entities under international law. It is also regrettable that Buchanan focuses only on complaints and fails to consider the fact that the relationship between the UN bodies and states also involves peaceful monitoring. In this process states are obligated to submit regular reports about their performance to the UN treaty bodies, which are then kept on record and fact-checked. Since 2007 the UN has been conducting a universal periodic review of the human rights performance of all member states.

On the whole, while this part of Buchanan's monograph may be disappointing, it must be said that his treatise offers an interesting insider's view of the debates of American philosophers. What makes his investigation valuable for historical methodology is the introductory assumption that there is no generally accepted modern theory of human rights. The existence of a contemporary theory of human rights is a myth. In general, the academic discussion among American philosophers of human rights which Buchanan addresses looks like a polemic within a very closed world. Historians might perhaps derive more benefit from the philosophical work of a lawyer, the famous American advocate of Israel, Alan Dershowitz.
In his not-so-famous book Rights from Wrongs, he proposes an experiential theory of the origin of human rights.37 He rejects all externalist approaches which look for the origins of human rights in something outside the structure of the humanly constructed legal system (God, Nature, natural law). In his view the source of human rights is mankind's historical experience of injustice. "[…] Rights are those fundamental preferences that experience and history - especially of great injustices - have taught."38 The source of rights is "the human ability to learn from experience and to entrench rights in our laws and in our consciousness."39 This experiential explanation is perhaps more consistent with complex historical changes than the static philosophical constructs.

35 This problem has been addressed mainly by Israeli lawyers.

Discontinuous histories of human rights

Respect for the complexities of life is also a strong point of the historical method used by the American historian Brian Tierney, who is most famous for research into medieval thought. He had already provided a complex account of the long-range history of human (natural) rights40 in his book The Idea of Natural Rights, published in 1997.41 His last book Liberty and Law elaborates on one of the most important aspects of the account he had given in his previous historical survey.42 The first book sparked off a minor controversy in which Tierney was accused of overrating the significance of medieval thinkers and stretching the beginnings of the "modern theory of human rights" too far into the past.43 This obsession with the correct beginnings of the idea of true human rights seems to be quite common among historians.

40 I do not distinguish a history of "natural rights" from a history of "human rights". It is wrong to assume that both terms denote different histories, as implied by Samuel Moyn, Giuseppe Mazzini in (and beyond) the History of Human Rights, in: Pamela Slotte - Miia Halme-Tuomisaari (edd.), Revisiting the Origins of Human Rights, Cambridge 2011, p. 119-139. The idea that both phrases correspond to different concepts is a corollary of the biographical illusion. It is also incorrect to believe that "natural rights" (and its counterparts in other languages - iura connata, droits naturels, Rechte von Natur) were used only by premodern thinkers, whereas human rights only by modern thinkers. Human rights (derechos humanos) in Spanish were used as early as the 16th century; the phrase rights of man already appears during the American Revolution and before Thomas Paine used it in his book written under the influence of the French Revolution. The term iura hominum universalia was used by the German enlightener Christian Wolff, which was rendered into German as allgemeine Rechte. Younger German authors used phrases such as Rechte der Menschheit, Menschenrecht etc. French physiocrates used the phrase 'droits naturels de l'homme'. What is important are not these phrases and their "lives" but their function within a given legal discourse or a given legal theory. It is just necessary to beware that the term natural rights (jura naturae) may sometimes denote rights of men in the state of nature and not "innate rights". Yet the development of these legal theories is a larger topic which cannot be discussed here. The terms bürgerliche Rechte, droits politiques or Grundrechte (fundamental rights) denoted rights of citizens in real states.
Their popularity in the 19th century was connected with the rejection of natural law and the old idea of a state of nature. What is again important is the different legal theory behind these terms.

There has been quite an impressive variety of probable beginnings. Some thinkers see them in Roman law, some in the nominalist philosophy of William Ockham, some in the conciliarist theologian Jean Gerson, some in the modern Dutch thinker Grotius, some in Hobbes and still others in John Locke.44 Tierney has been attacked by a colleague who defended the merits of John Locke.45 This is quite paradoxical, for Tierney's way of writing is characterized by his reluctance to subscribe to any version of a neat continuous story of human rights. In Liberty and Law, he emphasizes - perhaps as a response to the polemic - that the history of natural rights cannot be "written as a grand narrative of an idea slowly ripening through the ages until it reached an impressive maturity in the work of some great thinker."46 In his view, each era had its own problems, to which the thinkers responded. Each era had to start - so to speak - from a new beginning. Tierney does not subscribe to investigations of isolated notions, as the German Begriffsgeschichte does. The account in each book is broken down into isolated cases, in which Tierney gives the context of the discussion to which the texts belonged. Furthermore, Tierney rarely forgets to make the reader aware of the difficult structure of medieval texts, which serve as the more immediate context in which natural rights terminology occurs. When the nature of the inquiry requires it, Tierney also supplies an analysis of the author's life and work. His narrative method is a model of the contextual approach to the history of ideas. It is also a nice example of a narrative which does not follow illusions.

It is interesting to see how Tierney copes with the question of beginnings. In Natural Rights he avoided a clear answer by starting with a criticism of the interpretations proposed by the French medievalist Michel Villey, who rejected the notion that subjective rights were already present in Roman law and defended the claim that the birth-moment came with William Ockham's happy synthesis of nominalism and subjective rights. In Liberty and Law the story starts with the Stoic notion of adiaphora and the Church fathers, because in this inquiry Tierney already went one step further and, instead of the origins of the phrase "human rights", looked for the development of the idea that there is a broad field of actions which are neither prescribed as moral duties nor prohibited as evils, but legally permitted as something either praiseworthy or at least morally indifferent. Tierney's own answer to the question of beginnings respects the alterity of medieval intellectual culture(s). He seems to agree with Villey's conclusion that Roman legal culture was not centred around the notion of subjective rights, even though Roman legal monuments do contain well-known definitions of jus and justitia articulated in terms of individual rights.47 He sees the shift towards an emphasis on subjective rights in the Middle Ages, but does not agree with Villey's conclusion that it was Ockham who made the decisive breakthrough. In fact, Tierney does not acknowledge any "great thinker" or major text which would be responsible for this shift. Instead he locates the transformation in the long "flow of texts" between 1150 and 1310.
Yet between this date and the 14th century the notion of subjective rights was debated in many different contexts by many different social groups, which makes it difficult to maintain that this early development constituted a continuous process. At the beginning, it was the experts on canon law, then the French theologians who based their comments on Peter Lombard's Sentences, and in the 13th century it was the age of Thomas Aquinas. However, their heritage was disregarded by 14th century figures who already had to face the Great Schism and the struggle between Ludwig of Bavaria and the Papacy. Sometime between Rufinus' comment on Gratian's Decretum (around 1160) and Joannes Monachus's Glossa aurea (around 1310) the notion of subjective right as a power (potestas, potentia, vis, facultas) and as fas established itself and occupied a central place in European legal culture. Yet we must be aware that all these painfully reconstructed debates only elucidate the role of "rights", but not the role of "human rights". Medieval thinkers did not argue that these rights are innate and that they belong universally to all mankind. Each of these periods had its own major issues and specific contexts. In Liberty and Law Tierney put even more weight on the discontinuity between these contexts. They are important if we do not want to examine only the history of the isolated phrase "natural rights", but if we also desire to understand how it was employed in the communication processes of the era. Tierney reveals in both books the significance of the debate on Franciscan poverty, which also gave impulsion to Ockham's writings. Yet in terms of continuity he is eager to identify the links between these medieval developments and early modern thinkers. He obviously stresses the Spanish theologians of the school of Salamanca, the debate on the Indians' rights and the work of the Jesuit Francisco Suarez, which already responded to the challenge posed by early modern absolute monarchy to the autonomy of the church. Another strong link between medieval traditions and the modern age is the system of moral laws proposed by the German enlightener Christian Wolff. Tierney has perhaps overstated his case in claiming that "Wolff's work could thus be seen as a version of Thomistic teaching brought up-to-date for a modern readership",48 but he is certainly right in noting that his system of natural laws and natural rights provides the most coherent account of the mutual relationship between prescriptions, prohibitions and permissions.49 In this regard, Wolff concludes centuries of the development of an idea. Wolff also coined the notion of "permissive natural law", which became the subject of Tierney's last book, subtitled The Idea of Permissive Natural Law, 1100-1800. I am convinced - on the basis of my own research on 17th and 18th century texts on natural law - that Tierney's conclusion about the crucial importance of permission is correct, and the path he is following in his book is a step in the right direction.

47 Institutiones 1.1. and 1.3.
48 B. Tierney, Liberty, p. 316. However, Tierney has discovered that Wolff's definition of jus as facultas agendi is borrowed from Suarez. Even though Wolff never quotes Suarez by name, this is a piece of hard evidence of continuity between these two thinkers.
People who only follow the history of isolated rights, or believe in the existence of a modern theory of rights which somehow moulds them into a coherent whole, may not realize that the idea of subjective rights is at odds with the existence of an objective legal order. It is only when we understand rights as a broad area of what is permitted under law that they may be reconciled. Tierney investigates the uses of permissive law in both of his books, but in Liberty and Law he stretches the chronology of his investigation up to Immanuel Kant and the end of the Enlightenment. However, I am afraid that Tierney has not noticed that the notion of permissive law in this sense fell into disrepute after Wolff. His German successors reduced law to commands backed by sanctions, whereas laws that merely permit were not real laws in their view.50 In Kant's legal philosophy Erlaubnisgesetz denoted merely the general ability of a human being to bind other humans (i.e. to be a bearer of rights).51 Tierney's impression that Kant's reasoning suffers from a tension between an old tradition and a new philosophy, which led to an "awkward antinomy" and "an impasse Kant sought to avoid", is based on a misconception of Kant's moral philosophy.52 For Kant, the conjectural history of a transition from a state of nature into a civil state did not serve as a logical argument with which to explain the grounding of legal obligations. The "natural law" in Kant does not prohibit and permit the same thing at the same time, for Kant did not use the highest law (or the imperative) as the highest premise for a deduction of lower duties and rights. We should not forget that he had made the famous turn towards the subject. In moral reasoning, the subject has to decide whether whatever he/she is doing may be used as a general law upon which everybody could act. We may illustrate it by the example of his argument against suicide in the Groundwork of the Metaphysics of Morals. In legal reasoning, the aim of the general law was merely to prevent individuals from encroaching on each other's freedom, not to follow any moral goals. Law only coordinates the social life of people inhabiting the Earth, for the basis of law in Kant is not only the Erlaubnisgesetz, which is to be understood as the capacity of each human to bind other humans by law, but also the fact that the Earth where mankind lives is finite. Kant was later misunderstood as a kind of deficient idealist. The experiential basis of his practical philosophy was somehow ignored by 19th century thinkers. In law, German lawyers once again had to accommodate the tension between subjective rights and an objective legal order. This question became a topical problem after the defeat of the revolution of 1848.

An International School of Human Rights?

The natural law thinkers of the 17th and 18th centuries have been researched by other historians, among whom we should at least mention Knud Haakonssen, Diethelm Klippel and Frank Grunert, and a number of others who have recently organized themselves into the network Natural Law 1625-1800.54 It should be noted that there has always been some reluctance to admit early modern thinkers as being a part of the history of "real" human rights, for they have always been suspected of rather being supporters of absolute monarchy.55 However, Samuel Moyn has recently cut the Gordian knot of nuanced interpretations and declared that the real history of human rights starts only with the "human rights revolution" of the 1970s.
The general historical survey of this new historical conception was published in 2010 in a book with the telling title The Last Utopia.56 The title comes from Moyn's conviction that human rights are the last utopia that has been left for mankind after other utopias before them failed. It also implies that utopian dreams are a good thing. After this book, the utopian conception has recently been diffused in an impressive number of collective volumes and articles by Moyn and his German followers Stefan Ludwig Hoffmann and Jan Eckel. One of them is a collection of articles entitled Human Rights and the Uses of History, which will be discussed below in more detail. After this collection of articles, Moyn published another short book on Christian human rights,57 which documents the earlier stage of human rights in the 20th century. Furthermore, his voice has been echoed by the German historians Stefan Ludwig Hoffmann and Jan Eckel.58 In 2012 a themed issue of Geschichte und Gesellschaft was edited by Hoffmann and devoted to the penetrating insights of Samuel Moyn.59 In 2015 the idea that real human rights were born in the 1970s was elaborated in the collective volume The Breakthrough, edited by Eckel and Moyn.60 The epoch-making significance of Moyn's insights was further praised by Stefan Ludwig Hoffmann in a theoretical article published in 2016 in Past and Present.61

To an East European reader, Moyn's works convey extremely interesting insights about the American background of the 1970s human rights campaign. I consider that even Czechoslovak sources confirm that Communist regimes perceived Jimmy Carter's human rights campaign as the greatest threat, greater than the Helsinki process or domestic dissident movements. Moyn's works are also a valuable source of information on current American historiography, on which he comments extensively. Of course, one may wonder whether human rights historiography really began only with Lynn Hunt and her book Inventing Human Rights (2007), as Moyn asserts.62 What would Tierney and many others think about this?

The greatest difficulty for me is Moyn's conception of human rights - to be exact, I mean the fact that he does not have any conception at all. He never explains anything about the content, composition or function of rights in law, or any of the many technical problems concerning any of the legal concepts of human rights. His work is a glaring example of an approach which is based on the biographical illusion. Paradoxically, he often uses the metaphor of birth, "death at birth" or "stillborn child". The claim that real human rights were not born in the 1940s, as hitherto assumed, but in the 1970s is itself an instance of this birth metaphor.

This is closely linked to one omission in Moyn's historical account of the 1970s. He never considers the fact that even the socialist states developed their own culture of human rights.63 All the socialist states had long catalogues of (civil) rights in their constitutions which were allegedly based on a "socialist conception of human rights". This theory was intentionally elaborated by a number of legal theorists who would even seek to address a Western readership in books published in English.64 This effort actually intensified after Jimmy Carter's human rights offensive. Socialist states responded with a wave of scholarly works on the "socialist conception of human rights", which were presented as more genuine than the capitalist fake rights.65
The existence of this socialist conception of human rights, which was intentionally presented as an alternative to the "Western concept", changes at least the factual description of what was happening. There was not a conflict between those who endorsed human rights and those who rejected them; there was a conflict between different concepts of human rights, and the acknowledgement of this fact makes it clear that even historians writing on this period should address the problem of the concept of human rights which Carter and the activists defended.66 Moyn tells us only when the breakthrough happened and who the actors were. The closest he comes to answering the question about the new concept is the chapter "The Purity of this Struggle" in The Last Utopia.67 (Besides, the socialist states massively influenced the drafting of key UN documents. For example, the two international conventions of 1966 no longer contain a right to private property, but they do contain a longer list of economic rights, even though some of them are rather strange from a global perspective.)

If we ask what was new about the new and true human rights that appeared in the 1970s, then Moyn's answer seems to rest on two claims. Firstly, the new human rights were real because they were international and the new NGOs made it possible to supervise the moral conduct of states. Secondly, they were real because only the activists who came in the 1970s finally had pure hearts and, unlike their miserable predecessors, saw human rights as a universal value. The first claim implies that the real goal of human rights is to fight against the state. The second claim reveals that Moyn's own idea of real human rights is actually based on a certain vision of human nature. In his utopia, justice would be guaranteed not by law but by the morality of "nice persons" in political NGOs. If the solution for achieving a better world is to be the creation of a higher supervision above the individual states, then it suffers again from the old problem of who would guard the guardians.68 Why should we trust activists from NGOs? Why should they be a better guarantee of justice than law and the state? If anyone believed that, then at least the disastrous Durban conference of 2001 should have been a warning that the moral goals of the international NGOs are somewhat problematic.

The book Human Rights and the Uses of History does not alter this attitude. It is a collection of articles previously published in the journal The Nation. Most of them are actually just long book reviews; to be exact, six of the eight chapters. In spite of the promising titles, we do not learn very much about the topics discussed, for they are mainly focused on evaluating the performance of the author under review. The genre of the book review does not give Moyn the opportunity to elaborate his own opinions on these problems. In the introduction we find quite reasonable opinions on the function of history, but they are not followed in the book itself, because Moyn still follows the lunatic utopian agenda and mostly reaffirms what had been said in The Last Utopia. In the "Epilogue" he explains that he does not actually want realism; he holds it to be a good thing when human rights are hidden behind a fog of utopian dreams, because only such vague utopias can motivate people: "[…] my worry is that human rights have conformed too much to reality.
The utopian challenge presented by human rights has proved so minimal that they easily became neutered, and were even invoked as excuses - for example, in wars serving other interests - for choices their original advocates did not intend."69 A case in point is the Palestinian application for membership of the United Nations in 2012.70 This was a very unfortunate idea which did not help in any way and only damaged the value of previous negotiations with Israel. Moyn wrote at that time an article, Face the Nations, in which he supported the Palestinian plan.71

What is useful is the chapter "Human Rights in History", which sums up in a nutshell what was said in the previous book. We learn again that previous human rights thinkers invoked human rights to "found a nation-state of their own, not to police someone else's",72 that contemporary human rights have nothing to do with European natural law, revolution, slavery or the Holocaust,73 that Carter has the merit of invoking the concept "for purposes it had never before served",74 that we once again find an absolutely uncritical assessment of NGOs and their achievements, and that human rights are somehow connected to efforts to "transcend politics".75

Unfortunately, Moyn does not follow in this collection of essays the history of the "rights talk" in the media. Yet the strongest argument in favour of his thesis of a human rights revolution in the 1970s was the statistics documenting occurrences of the phrase "human rights" in English and American newspapers.76 Perhaps this was what was really new about human rights in the 1970s: they had become a media format, a way to communicate complex moral assessments in an easy and economical manner. Such issues would otherwise require much more complicated expressions.

Conclusion: History as a Limit on Fantasies

History and law have profited from reciprocal cooperation at several decisive moments in their development. In the 17th and 18th centuries history helped legal science to liberate itself from the heritage of Roman law, and legal theory too certainly profited from the transition from the conjectural histories common in works on the social contract to real histories which reconstructed the true origins and development of law in the European states. Possibly one of the most fruitful periods in legal theory was the time of the disintegration of the historical school of law in the second half of the 19th century, because it was also a time of happy symbiosis between historical research and theoretical reflection on law. People like
Kinematic Analysis of Four-Link Suspension of Steering Wheel by Means of Equation Sets of Geometrical Constraints with Various Structure

In research on the kinematic and dynamic properties of complex mechanical set-ups, the results of numerical experiments are used, and it is desirable to minimize the calculation time of the various problems in this domain. For the multi-link suspension of the steered wheel, the sets of equations of the geometrical constraints were formulated in two structurally different forms, scalar and vector. The vector set consists of transcendental equations; their solution was possible only after first expanding the trigonometric functions into power series. Because of the finite amount of computer memory available to the algorithm solving the vector form, it was possible to obtain solutions consisting of three terms. The number of terms in the power-series solutions of the equations determines the admissible magnitudes of the increments of the input parameters (degrees of freedom). In this paper it is demonstrated that this demand can be fulfilled by changing the structure of the geometrical constraints of the multi-link wheel suspension system.

Introduction

In theoretical problems in the domain of kinematics, the guidance systems of the steered wheel relative to the car body are represented by spatial mechanisms with two degrees of freedom. The structure of these mechanisms varies, and their kinematics is solved by means of several different methods [2]. In the case of the McPherson suspension and the suspension with two diagonal beams, matrix analysis is used [7]. Nonlinear sets of equations describing the geometrical constraints on the relative movement of the elements of a multi-link suspension system are formulated in scalar [3] or vector [6] form. These sets are usually solved by numerical methods. They can also be solved by the perturbation method [1], which makes it possible to present the solutions of the nonlinear sets of geometrical constraints as series.

In papers [4,5] from the domain of steered-wheel suspensions, it is usually the kinematic characteristics that are analysed. These characteristics constitute intersections of the spatial characteristics; they are typically the dependences of the relative change of wheel track, wheel camber and steering angle on the degrees of freedom, one of which has a fixed value. Such an analysis is not precise, because singular configurations of the mechanism can occur in the movement space of the wheel suspension.

In problems in the domain of suspension synthesis and car dynamics modelling, minimization of calculation time is desired, so choosing the proper solution method is very important. Wheel suspension system synthesis consists of several stages. First, the structure of the suspension mechanism should be defined. Next, its geometrical parameters are determined, that is, the coordinates of the joints connecting the beams with the body and with the stub axle or wheel support. The placement of these elements in the suspension movement space follows the designed configuration of the suspension. The synthesis of steered-wheel mechanisms is performed with the simultaneous change of two parameters, usually the vertical translation of the wheel centre and the translation of the rack of the steering gear [6]. Examination of the dynamic properties of designed cars is possible by means of mathematical models with many degrees of freedom; in these models, suspension systems with different structures are considered.
The range and purpose of the paper

In this paper, two ways of solving the kinematics of the multi-link steered-wheel suspension system with two degrees of freedom will be presented, both using the perturbation method [1]. In the first, the set of 17 equations representing the geometrical constraints on the relative displacements of the suspension elements is formulated in scalar form; in the second, 5 equations are formulated in vector form. The main aim of the work is the determination of the spatial kinematic characteristics of the four-link steered-wheel suspension mechanism. A quantitative dependence between the effective numerical computation time and the structure of the geometrical constraints of the examined mechanism will be presented.

Structure of multi-link steered wheels' suspension system

Fig. 1 presents the diagram of the structure of the four-link steered-wheel suspension system. Points B1, B2, B4, B5 are the centres of the ball-joints linking the beams with the stub axle. Point B3 is the centre of the ball-joint linking the extreme rod of the stub-axle mechanism with the stub-axle beam.

Figure 1. Scheme of the four-link steered wheel suspension mechanism.

Points A1, A2, A4, A5 are the centres of the ball-joints which replace the silent-block joints linking the beams with the body. Point A3 is the centre of the ball-joint joining the extreme rod of the stub-axle mechanism with the steering gear's rack. The lower front beam, represented in the figure by the segment A1B1, is joined with the stabilizer at the point W1. At the point C1 this beam is joined with the telescopic column supporting the body at the point A6. Points B6 and B7 define the wheel's rotation axis. The coordinate systems {N} and {O1} are attached to the body and to the wheel's stub axle, respectively.

Equations of the mechanism's geometrical constraints

The geometrical constraint equations of the suspension mechanism presented above can be formulated as a set of 17 or of 5 nonlinear algebraic equations. In the first formulation, the equations express the squares of the distances between the characteristic points of the mechanism; for the five beams they take the form

$(x_{B_j} - x_{A_j})^2 + (y_{B_j} - y_{A_j})^2 + (z_{B_j} - z_{A_j})^2 = l_j^2, \quad j = 1,\dots,5. \quad (1)$

In the above set, the input parameters are the coordinate q3 of the point O1(q1, q2, q3) and the translation of the rack uz, added to the coordinate yA3 of the point A3 (xA3, yA3+uz, zA3). For given parameters q3 and uz, the coordinates of the points Bj (xBj, yBj, zBj), for j = 1,...,5, and the coordinates q1 and q2 of the point O1 are determined from the set (1).

In the second method, the equations express the squares of the lengths of the vectors with origins and ends at the points Aj and Bj, for j = 1,...,5, respectively:

$(\mathbf{r}_{B_j} - \mathbf{r}_{A_j}) \cdot (\mathbf{r}_{B_j} - \mathbf{r}_{A_j}) = l_j^2, \quad j = 1,\dots,5. \quad (2)$

For given parameters q3 and uz, the coordinates q1 and q2 of the wheel centre O1(q1, q2, q3) and the rotation angles α, β, γ of {O1} with respect to {N} are calculated from the set (2). To ensure the equivalence of the calculation ranges of the algorithms derived on the basis of the sets (1) and (2), the rotation angles α, β and γ of {O1} with respect to {N} must be determined in the first method as well, so the set (1) has to be supplemented with the calculation of these angles. After the coordinates of the points O1 and Bj (j = 1,...,5) have been calculated, and the rotation angles of {N} with respect to {O1} are known, the coordinates of the vectors AjBj are computed. The unit vector of the wheel rotation axis,

$\mathbf{c} = [c_x, c_y, c_z]^T = \overrightarrow{B_6 B_7} / \lvert \overrightarrow{B_6 B_7} \rvert,$

then yields the steering angle δk and the camber angle γk of the wheel (e.g. δk = arctan(cx/cy) and γk = arcsin(cz) in one common convention). The calculation of the angles δk and γk is analogous in both algorithms.
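To make the procedure concrete, the sketch below solves the five vector equations of the set (2) numerically for one pair of input values. It is only an illustration, not the authors' code: the joint coordinates, the local coordinates of the points Bj in {O1}, and the Euler-angle convention are all invented placeholders, and a general-purpose nonlinear solver is used instead of the perturbation method discussed in the next section.

```python
# Minimal sketch of solving the vector constraint set (2) for q1, q2 and the
# rotation angles (alpha, beta, gamma) at given inputs q3 (wheel-centre height)
# and uz (rack travel). All geometry below is an invented placeholder.
import numpy as np
from scipy.optimize import fsolve

# Ball-joint centres A_j in the body frame {N} (rows: A1..A5, units: m).
A = np.array([[0.30, 0.60, 0.25],
              [0.10, 0.55, 0.30],
              [0.00, 0.50, 0.35],   # A3: its y-coordinate is shifted by uz
              [0.25, 0.58, 0.55],
              [0.05, 0.52, 0.60]])

# Local coordinates b_j of the points B_j in the stub-axle frame {O1}.
b = np.array([[ 0.05, -0.12, -0.15],
              [-0.05, -0.10, -0.12],
              [-0.10, -0.14,  0.00],
              [ 0.04, -0.11,  0.14],
              [-0.04, -0.09,  0.16]])

def rot(alpha, beta, gamma):
    """Rotation matrix of {O1} with respect to {N} (x-y-z convention assumed)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def beam_vectors(q1, q2, q3, alpha, beta, gamma, uz):
    """Vectors A_jB_j for the current configuration of the mechanism."""
    Au = A.copy()
    Au[2, 1] += uz                              # rack travel moves A3 along y
    B = np.array([q1, q2, q3]) + b @ rot(alpha, beta, gamma).T
    return B - Au

# Squared beam lengths l_j^2, fixed by the design position (here q = (0.15, 0, 0.40)).
L2 = np.sum(beam_vectors(0.15, 0.0, 0.40, 0.0, 0.0, 0.0, 0.0)**2, axis=1)

def residuals(x, q3, uz):
    q1, q2, alpha, beta, gamma = x
    d = beam_vectors(q1, q2, q3, alpha, beta, gamma, uz)
    return np.sum(d**2, axis=1) - L2            # the five equations of set (2)

# One point of the spatial characteristic: inputs q3 = 0.42 m, uz = 0.01 m.
sol = fsolve(residuals, x0=[0.15, 0.0, 0.0, 0.0, 0.0], args=(0.42, 0.01))
print("q1, q2, alpha, beta, gamma =", sol)
```

Sweeping q3 and uz over a grid of increments in this way yields the spatial kinematic characteristics discussed above.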
Solution of the equation sets of the suspension mechanism's geometrical constraints

The solutions of the sets (1) and (2) were obtained by means of the perturbation method [1]. For the transcendental set (2), the trigonometric functions were first expanded into power series:

$\sin\varphi = \varphi - \varphi^3/3! + \varphi^5/5! - \dots, \quad (10)$

$\cos\varphi = 1 - \varphi^2/2! + \varphi^4/4! - \dots \quad (11)$

The resulting set of equations can be presented in the general form

$\Phi_i(q_1, q_2, \alpha, \beta, \gamma) = 0, \quad i = 1,\dots,5. \quad (12)$

The equations of the set (12) were split into linear and nonlinear parts,

$\Phi_i^{L} + \Phi_i^{N} = 0, \quad i = 1,\dots,5, \quad (13)$

the nonlinear parts were multiplied by the perturbation parameter ε, and an auxiliary set of equations was obtained:

$\Psi_i(\varepsilon) = \Phi_i^{L} + \varepsilon\,\Phi_i^{N} = 0, \quad i = 1,\dots,5. \quad (14)$

For ε = 1 the sets (13) and (14) are identical, while for ε = 0 the set (14) consists of the linear parts only. It was assumed that the solutions of the set (14) can be presented as power series of the parameter ε (15). Substituting these series into (14) and equating the coefficients of equal powers of ε yields the set (16), from which the consecutive terms of the solutions are determined; because of the finite amount of computer memory, the series for the vector form were truncated to three terms (18). The set (1) was solved analogously.

Numerical example

The characteristic points were assigned according to the design placement of the suspension mechanism.

Summary

For the multi-link suspension of the steered wheel, the sets of equations of the geometrical constraints were presented in two structurally different forms, scalar and vector. The vector set consists of transcendental equations; their solution was possible only after first expanding the trigonometric functions into the power series (10) and (11). Because of the finite amount of computer memory available to the algorithm solving the vector form, it was possible to obtain solutions consisting of three terms (18). The number of terms in the power-series solutions of the equations determines the admissible magnitudes of the increments of the input parameters (degrees of freedom). The solutions of the set (1) are numerical series containing 10 terms each.
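The term-by-term structure of the ε-expansion in Section 5 can be illustrated on a single toy equation with a linear and a nonlinear part. The sketch below is an assumed illustration of the general scheme, not the authors' 17- or 5-equation sets; it uses sympy to collect the coefficients of equal powers of ε and to solve the resulting triangular set term by term.

```python
# Perturbation scheme on a toy equation F(x) = F_L(x) + F_N(x) = (x - 1) + x**2/10 = 0.
# Mirrors (13)-(16): multiply the nonlinear part by epsilon, expand the solution
# as a power series in epsilon, and solve order by order.
import sympy as sp

eps = sp.symbols('epsilon')
x0, x1, x2 = sp.symbols('x0 x1 x2')

# Series ansatz for the solution, truncated to three terms as in eq. (18).
x = x0 + eps*x1 + eps**2*x2

# Auxiliary equation (cf. (14)): nonlinear part multiplied by epsilon.
F = (x - 1) + eps*sp.Rational(1, 10)*x**2

# Coefficients of epsilon**0, epsilon**1, epsilon**2 form a triangular set (cf. (16)):
# each introduces exactly one new unknown, solved with the previous ones substituted.
Fexp = sp.expand(F)
coeffs = [Fexp.coeff(eps, k) for k in range(3)]
sol = {}
for c in coeffs:
    unknown = [s for s in (x0, x1, x2) if s in c.free_symbols and s not in sol][0]
    sol[unknown] = sp.solve(c.subs(sol), unknown)[0]

print(sol)                           # {x0: 1, x1: -1/10, x2: 1/50}
x_approx = x.subs(sol).subs(eps, 1)  # set epsilon = 1 to recover the original problem
print(x_approx)                      # 23/25 = 0.92, close to the exact root ~0.9161
```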
Development of a 3D-printed testicular cancer model for testicular examination education

Introduction: Testicular cancer is the most commonly diagnosed malignancy in young males. Testicular examination is a non-invasive and inexpensive means of detecting testicular cancer at an early stage. In this project, a set of 3D-printed models was developed to facilitate teaching testicular examination and improving understanding of testicular malignancies among patients and medical learners.

Methods: Five scrotum models were designed: a control model with healthy testes, and four models containing a healthy testicle and a testicle with an endophytic mass of varying size. The anatomy, texture, and composition of the 3D-printed models were refined using an iterative process between the design team and urologists. The completed models were assessed by six urologists, two urology nurse practitioners, and 32 medical learners. Participants were asked to inspect and palpate each model, and to provide feedback using a five-point Likert scale.

Results: Clinicians reported that the models enabled accurate simulation of a testicular examination involving both healthy and pathological testes (x̅=4.3±1.0). They agreed that the models would be useful teaching tools for both medical learners (x̅=4.8±0.5) and patients (x̅=4.8±0.7). Following an educational session with the models, medical learners reported improvements in confidence and skill in performing a testicular examination.

Conclusions: 3D-printed models can effectively simulate palpation of both healthy and pathological testes. The developed models have the potential to be a useful adjunct in teaching testicular examination and in demonstrating abnormal findings that require further investigation.

Introduction

Testicular cancer is the most commonly diagnosed malignancy in men aged 15-44 years living in North America.1 Testicular cancer is highly amenable to treatment when caught at an early stage,2 with a five-year survival approaching 100% in patients diagnosed at stage I.3 In contrast, patients who miss the early window and present with a stage IV metastatic diagnosis have a five-year survival of 74%.3 Early detection and management of cancerous testicular lesions is thus critical in optimizing patient outcomes. As testicular cancer often presents as a painless mass in the scrotum and/or testicular swelling,4-6 testicular examination is a non-invasive and simple test to identify such pathologic findings at an early stage. However, past research has shown that only 5% of medical students beginning their clinical clerkship training,7 as well as only 36% of pediatrics residents,8 identify as comfortable with their testicular examination skills. Accumulating evidence also suggests potential benefits of testicular self-examination (TSE) as a screening test given its privacy and convenience;9,10 however, previous surveys amongst male college students report that only 41% are taught TSE and only 8% perform the examination with regularity. Such results highlight an educational gap amongst both medical learners and patients, as well as a need for novel tools that will facilitate such education. One technology that enables the development of high-fidelity educational tools is 3D printing.
This technology has the distinct advantage of demonstrating spatial relations between anatomically accurate structures, making it well-suited for use in both medical training and patient education.11-13 The use of 3D-printed models as an adjunct to existing techniques has been found to be preferred by patients, as well as beneficial in improving their understanding of relevant anatomy.14-18 Similar advantages have been demonstrated in medical education, where 3D-printed models have been effective in improving anatomy and clinical skills curricula.19-22 Despite the reported accuracy and educational benefits of 3D-printed anatomical models, no previous study (to the authors' knowledge) has investigated 3D printing as a means of improving testicular examination education. To this end, the presented study describes the development of a set of 3D-printed models designed for the purposes of teaching testicular examination and improving understanding of testicular malignancies amongst patients and medical learners.

Methods

A multidisciplinary team comprising urologists, engineers, and medical students collaborated to conceptualize a set of 3D-printed testicular cancer models. Five models were designed, with each simulated scrotum containing either a) two healthy testicles, or b) one healthy testicle and one testicle with an endophytic lesion of varying size. The endophytic lesions were placed at varying locations within the pathologic testicle and ranged in size from 5% to 60% of the total size of the testicle. Moreover, larger testicle models were used for bigger masses, simulating the testicular swelling that is often concurrent with testicular lesions.

An initial prototype was designed and developed to simulate a normal human scrotum, testes, and epididymides. An iterative design process between the design team and urologists allowed for extensive refinement of the models in terms of anatomical accuracy. The urologists were presented with three printing materials (Dragon Skin 10 NV, Dragon Skin Fx-Pro, and Smooth-On Ecoflex 00-30) and asked to select the materials that they felt best simulated the texture and consistency of a scrotum, testicle, and epididymis. The design team used the urologist feedback to further optimize the developed model, and this process was repeated until the urologists agreed that the model accurately simulated a human scrotum both on inspection and palpation (Figure 1).

Next, a similar process was used to develop a series of models containing a cancerous testicle. Testicular masses were simulated by first casting an irregular shape in a denser material than that used for the testicle. This irregular mass was then suspended in the mold used for the testicle during the casting process. This enabled an irregular mass to be placed within the testicle and to be appreciated on palpation. The urologists were presented with individual samples cast in thermoplastic elastomer and polylactic acid polymer and were asked to select which material best simulated a testicular malignancy. The size, texture, and location of the masses were also guided by frequent input from the urologists. An iterative process was repeated until the urologists were satisfied that the models accurately simulated palpation of a cancerous testicle (Figure 1). All aspects of the model were designed in OpenJSCAD and finalized using MeshMixer, and all models were cast using a Prusa MK3S printer.
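The CAD step of this workflow can be prototyped in a few lines of code. The sketch below is a stand-in only: it uses the Python library trimesh rather than the OpenJSCAD toolchain named above, and all dimensions, offsets, and noise parameters are invented for illustration rather than taken from the study's design files.

```python
# Illustrative sketch: an ellipsoidal "testicle" mesh plus an irregular
# endophytic "mass" exported as STL, mimicking the two-material design above.
# All dimensions are invented placeholders, not the authors' geometry.
import numpy as np
import trimesh

rng = np.random.default_rng(0)

# Healthy testicle: a sphere scaled to a roughly 4.5 x 3.0 x 2.5 cm ellipsoid.
testis = trimesh.creation.icosphere(subdivisions=4, radius=1.0)
testis.vertices *= np.array([22.5, 15.0, 12.5])       # semi-axes in mm

# Endophytic mass: a small sphere with random radial noise for irregularity,
# placed off-centre so that it can only be appreciated on palpation.
mass = trimesh.creation.icosphere(subdivisions=3, radius=6.0)
mass.vertices *= 1.0 + 0.15 * rng.standard_normal((len(mass.vertices), 1))
mass.apply_translation([5.0, 3.0, 0.0])

testis.export('testis.stl')   # to be cast in soft silicone
mass.export('mass.stl')       # to be printed in the denser material (e.g. PLA)

# Roughly 5% here; the study's lesions ranged from 5% to 60% of testis volume.
print(f"mass/testis volume ratio: {mass.volume / testis.volume:.1%}")
```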
Once the set of testicular cancer models had been developed, two separate sessions were held to ascertain feedback from both clinicians and medical learners. In the first session, a group of urologists and urology nurse practitioners was asked to visually inspect and palpate the developed models. In a second session, first- and second-year medical students were provided with a brief tutorial on testicular examination by a staff urologist. Of the medical learners, the first-year medical students had not yet received formal urological education as part of their undergraduate medical curriculum, while the second-year medical students had received one hour of didactic teaching on testicular cancer and two hours on urologic examination skills. These learners were then asked to practice their examination skills using the developed models.

After using each model, participants in both sessions were asked to complete a survey in which they rated their agreement with several statements using a five-point Likert scale. Items on the clinician survey primarily focused on the anatomical accuracy of the developed models, the usefulness of the models in simulating a testicular examination, and the overall applicability of the models as teaching tools. The survey for medical learners featured additional items relating to the pre- and post-session levels of both skill and confidence in performing a testicular examination. Clinicians and medical learners were also asked to select, from a list of five potential applications (medical student training, resident training, nurse practitioner training, family physician training, patient education), all of the purposes for which they felt the models would be beneficial. Lastly, both surveys included a section for participants to provide qualitative feedback relating to areas for improvement and the potential clinical applicability of the developed models.

Results

Through the multidisciplinary design process, it was determined that Smooth-On Ecoflex 00-30 silicone was the most anatomically representative material for the texture of the model's scrotum and testicle. A similar evaluation of materials for the testicular pathologies concluded that polylactic acid polymer was the most anatomically representative material (Figure 2).

A total of six urologists and two urology nurse practitioners participated in the study. Responses from the surveyed urologists and urology nurse practitioners are summarized in Table 1. All surveyed participants agreed that the developed models would be useful teaching tools for both medical learners and patients. They also agreed that these models would be beneficial for resident, nurse practitioner, and family physician training. Respondents felt that the models enabled accurate simulation of a testicular examination, and all agreed that the models effectively simulated palpation of healthy and pathologic testes. Qualitative feedback indicated that the majority would use these models as a teaching aid for both patients and medical learners, and all agreed that they would incorporate these models in some way into their practice. In terms of areas for improvement, respondents suggested that the epididymis could be more prominent to more accurately simulate palpation. These recommendations were used to further optimize the anatomical accuracy of the testicular cancer model (Figure 1; Figure 2C).
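For reference, the descriptive statistics reported in Tables 1 and 2 (mean ± standard deviation and the percentages of agree, neutral, and disagree responses) are straightforward to reproduce from raw five-point Likert data. The responses in the sketch below are invented for illustration, not the study data.

```python
# Minimal sketch (with invented responses, not the study data) of how the
# summary statistics in Tables 1 and 2 can be computed from raw Likert answers.
import numpy as np

def summarize_likert(responses):
    """Summarize 5-point Likert responses (1=strongly disagree ... 5=strongly agree)."""
    r = np.asarray(responses)
    return {
        "n": r.size,
        "mean": round(r.mean(), 1),
        "sd": round(r.std(ddof=1), 1),            # sample SD, as usually reported
        "% agree": round(100 * np.mean(r >= 4), 1),
        "% neutral": round(100 * np.mean(r == 3), 1),
        "% disagree": round(100 * np.mean(r <= 2), 1),
    }

# Hypothetical answers to "This model is anatomically accurate":
print(summarize_likert([5, 4, 5, 4, 4, 5, 4, 5]))
# {'n': 8, 'mean': 4.5, 'sd': 0.5, '% agree': 100.0, '% neutral': 0.0, '% disagree': 0.0}
```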
Survey data were collected from 32 medical learners: 26 first-year and 6 second-year medical students. The results of the medical learner survey are summarized in Table 2. The proportion of medical learners identifying as confident in performing a testicular examination increased from a pre-session value of 6.3% to 84.4% following the session (Table 2). A similar effect was demonstrated in the proportion of learners claiming that they possessed the skills to perform a testicular examination, which increased from 6.3% to 100% (Table 2). The majority of medical learners felt that the use of the testicular models would be helpful in the current medical school curriculum. In addition, most medical learners believed that these models would be beneficial for medical student training (96.9%) and patient education (90.6%). The majority of medical learners also stated that these models would be useful for resident training (75.0%), nurse practitioner training (84.4%), and family physician training (75.0%). Medical learner qualitative feedback indicated that the testicular models were helpful in identifying a testicular mass on palpation, practising palpation technique, and differentiating between a pathologic and a healthy testicle.

Discussion

The presented 3D-printed testicular models have the potential to improve testicular cancer education for medical learners and patients. In the survey of medical learners, the models were shown to increase both skill and confidence relating to the performance of a testicular examination. Both medical learners and clinicians agreed that these models would be a beneficial addition to the existing medical learner urology curriculum.

Learning to perform a genitourinary examination on actual patients can be uncomfortable for both the patient and the learner. Alternative teaching modalities can potentially relieve the anxiety associated with the standard apprenticeship method of teaching urologic examination skills, as well as create a safe environment for learners to refine their skills.7,23,24 A study by Kaplan et al. reported that, after medical students were given an intensive examination skills course using a standardized patient, 90.3% of students were significantly more comfortable with performing a testicular examination, and these students rated the testicular examination as one of the most useful urologic skills to learn in a standardized environment.7 Medical learners have also previously reported a preference for practising examination skills on male anatomical models rather than standardized patients.23 In fact, one study found crude handmade models simulating testicular swelling pathologies to be beneficial in teaching urologic examination skills to medical students.24 As using a standardized patient can be costly and logistically burdensome, simulated education tools offer a convenient alternative that may improve testicular examination education.23 The survey data also suggested that the developed models would be beneficial in educating patients about TSE and testicular malignancies in general.
It has been previously reported that there is a significant lack of education in the general public surrounding testicular cancer and TSE.25-27 Misinformation relating to urologic cancers can be easily spread through social media.25,26 A recent publication by Yeo et al. suggested that, with the increase in patients acquiring information about testicular cancer from sources that are not validated or credible, direct patient education has become even more important.28 Directing patients towards validated testicular cancer education programs is beneficial for patient-physician communication, as well as for overall patient understanding. The testicular cancer model described in this study can serve as an adjunct for patient education and facilitate discussion regarding the benefits of screening and the risk of testicular malignancy.

Currently, the number of testicular cancer models available on the market is limited, and the available models are expensive and fail to accurately show disease progression. From a review of the current products on the market, a progressive set of five testicular cancer models would cost approximately $875 Canadian dollars (CAD), while a male pelvic trainer would cost approximately $3700 CAD. The use of 3D printing offers a unique solution to the high cost of existing models. The material cost of the five progressive testicular cancer models presented in this study was approximately $13 CAD ($10 of Smooth-On Ecoflex 00-30 silicone and $3 of polylactic acid polymer), a direct cost savings of $862 CAD. In addition, many universities and public libraries now offer 3D printing services that could be used to print the developed models at a low cost. Accordingly, 3D-printed technology allows anatomically accurate models to be made at a fraction of the cost of existing models, thus mitigating the financial barriers currently associated with quality testicular cancer education.

Using 3D-printed models as an adjunct to existing techniques is rated by patients as highly valuable, as well as beneficial in improving their understanding of relevant anatomy and surgical complications.14-18 In addition, truly patient-informed decision-making is predicated on a basic understanding of pathology and anatomy, a process that can be facilitated with various patient education aids. This technology has been shown to be a successful means of educating patients in urology.17,29,30 A recent systematic review by Lupulescu and Sun analyzed 27 studies on the use of 3D-printed technologies for preoperative surgical planning and patient education in renal surgery, and found that patient-specific 3D-printed models were useful for educating patients and their families on renal surgery, with high reported levels of satisfaction.11

Testicular cancer commonly presents with symptoms that are recognizable by patients; thus, it is important that the general population receives proper education on how to complete a TSE. Despite the potential benefits and relative ease of TSE, it should be noted that its use as a broad screening tool is debated in the current literature. Currently, neither the Canadian Urological Association nor the American Urological Association has published guidelines related to testicular cancer screening and TSE.
The United States Preventive Services Task Force recommends against testicular cancer screening in asymptomatic males due to a lack of evidence demonstrating a benefit and a potential association with increased anxiety related to a false-positive result.31,32 It does, however, recommend TSE for high-risk individuals, such as those with cryptorchidism or a positive family history.31,32 The Society for Adolescent Health and Medicine recommends TSE, stating that it identifies several risk factors for testicular cancer.33 It has also been found that TSE is associated with improved education and increased comfort amongst young adult males receiving a genital examination.33 Despite the current discrepancies between various organizations, it is generally agreed that a significant proportion of men in the general population are unaware of the causes and symptoms of testicular cancer. Moreover, the unclear benefit of TSE is likely attributable to a current lack of large population studies assessing the effect of this screening method.

The results of this study must be viewed in light of its limitations. The information collected in this study was ascertained from a single institution in a relatively small cohort. The large increase seen in medical learner skill and confidence in performing testicular examinations could be attributed to some external factors: the educational session was the first time that many of the students were exposed to testicular cancer education, so pre- and post-session self-reported outcomes were likely not the best metrics for quantifying educational benefit. As a caveat to this limitation, the second-year medical students participating in the session qualitatively noted that the 3D-printed models would be a useful adjunct to the existing urologic clinical skills curriculum.

Additional research is warranted for clinical validation of the developed 3D-printed models. Future work could include ascertaining feedback from a larger cohort of respondents, including members of the general public and high-risk patients. In order to demonstrate educational utility, it would also be beneficial to incorporate these models into the existing urology curriculum and to assess the associated educational effect using an interventional study design. This could be accomplished by comparing performance on standardized clinical examinations between students with and without exposure to the models as part of their urology curriculum. Similar models replicating other testicular and scrotal pathologies, such as epididymal cyst, testicular torsion, and hydrocele, could also be designed and developed using an iterative design process similar to the methodology outlined in this study.

Conclusions

This study describes the development and preliminary validation of 3D-printed urological models that may fill existing gaps in patient and medical learner education. 3D-printed models can simulate anatomical structures in a low-cost and effective manner. In the field of urology, this technology presents a unique opportunity to develop and produce educational models that maintain a high level of fidelity at a low cost. In this study, a set of testicular cancer education models was developed. These models were well-accepted by surveyed urology practitioners and medical learners.
The models were also shown to improve both medical learner skill and confidence in performing a testicular examination. The developed 3D-printed models may enable urologists and family physicians to better educate their patients, as well as assist medical learners in developing testicular examination skills.

References
1. Trabert B, Chen J, Devesa SS, Bray F, McGlynn KA. International patterns and trends in testicular cancer incidence, overall and by histologic subtype, 1973-2007. Andrology. 2015;3(1):4-12.
2. Cheng L, Albers P, Berney DM, et al. Testicular cancer. Nat Rev Dis Primers. 2018;4(1):29.
3. Miller KD, Nogueira L, Mariotto AB, et al. Cancer treatment and survivorship statistics, 2019. CA Cancer J Clin. 2019;69(5):363-385.
4. Germà-Lluch JR, Garcia del Muro X, Maroto P, et al. Clinical pattern and therapeutic results achieved in 1490 patients with germ-cell tumours of the testis: the experience of the Spanish Germ-Cell Cancer Group (GG). Eur Urol. 2002;42(6):553-562; discussion 562-563.
5. Wynd CA. Testicular self-examination in young adult men. J Nurs Scholarsh. 2002;34(3):251-255.
6. Khadra A, Oakeshott P. Pilot study of testicular cancer awareness and testicular self-examination in men attending two South London general practices. Fam Pract. 2002;19(3):294-296.
7. Kaplan AG, Kolla SB, Gamboa AJR, et al. Preliminary evaluation of a genitourinary skills training curriculum for medical students. J Urol. 2009;182(2):668-673.
8. Brenner JS, Hergenroeder AC, Kozinetz CA, Kelder SH. Teaching testicular self-examination: education and practices in pediatric residents. Pediatrics. 2003;111(3):e239-e244.
9. Aberger M, Wilson B, Holzbeierlein JM, Griebling TL, Nangia AK. Testicular self-examination and testicular cancer: a cost-utility analysis. Cancer Med. 2014;3(6):1629-1634.
10. Rovito MJ, Leone JE, Cavayero CT. "Off-label" usage of testicular self-examination (TSE): benefits beyond cancer detection. Am J Mens Health. 2018;12(3):505-513.
11. Lupulescu C, Sun Z. A systematic review of the clinical value and applications of three-dimensional printing in renal surgery. J Clin Med. 2019;8(7). doi:10.3390/jcm8070990
12. Tejo-Otero A, Buj-Corral I, Fenollosa-Artés F. 3D printing in medicine for preoperative surgical planning: a review. Ann Biomed Eng. 2020;48(2):536-555.
13. Shafiee A, Atala A. Printing technologies for medical applications. Trends Mol Med. 2016;22(3):254-265.
14. Zhuang Y-D, Zhou M-C, Liu S-C, Wu J-F, Wang R, Chen C-M. Effectiveness of personalized 3D printed models for patient education in degenerative lumbar disease. Patient Educ Couns. 2019;102(10):1875-1881.
15. Yang T, Tan T, Yang J, et al. The impact of using three-dimensional printed liver models for patient education. J Int Med Res. 2018;46(4):1570-1578.
16. van de Belt TH, Nijmeijer H, Grim D, et al. Patient-specific actual-size three-dimensional printed models for patient education in glioma treatment: first experiences. World Neurosurg. 2018;117:e99-e105.
17. Schmit C, Matsumoto J, Yost K, et al. Impact of a 3D printed model on patients' understanding of renal cryoablation: a prospective pilot study. Abdom Radiol (NY). 2019;44(1):304-309.
18. Teishima J, Takayama Y, Iwaguro S, et al. Usefulness of personalized three-dimensional printed model on the satisfaction of preoperative education for patients undergoing robot-assisted partial nephrectomy and their families. Int Urol Nephrol. 2018;50(6):1061-1066.
19. Lim KHA, Loo ZY, Goldie SJ, Adams JW, McMenamin PG. Use of 3D printed models in medical education: a randomized control trial comparing 3D prints versus cadaveric materials for learning external cardiac anatomy. Anat Sci Educ. 2016;9(3):213-221.
20. Chen S, Pan Z, Wu Y, et al. The role of three-dimensional printed models of skull in anatomy education: a randomized controlled trial. Sci Rep. 2017;7(1):575.
21. Gillis C, Harvey D, Bishop N, Walsh G, Dubrowski A. Male catheter insertion simulation using a low-fidelity 3D-printed model in undergraduate medical learners. Dalhousie Medical Journal. 2019;45(2). doi:10.15273/dmj.vol45no2.8998
22. Ryan JR, Almefty KK, Nakaji P, Frakes DH. Cerebral aneurysm clipping surgery simulation using patient-specific 3D printing and silicone casting. World Neurosurg. 2016;88:175-181.
23. Taylor JS, Dube CE, Pipas CF, Fuller BK, Lavallee LK, Rosen R. Teaching the testicular exam: a model curriculum from "A" to "Zack." Fam Med. 2004;36(3):209-213.
24. Sarmah PB, Sarmah BD, Ibrahim H, Panting J. Making models to simulate testicular swellings. Clin Teach. 2017;14(6):432-436.
25. Alsyouf M, Stokes P, Hur D, Amasyali A, Ruckle H, Hu B. "Fake news" in urology: evaluating the accuracy of articles shared on social media in genitourinary malignancies. BJU Int. 2019;124(4):701-706. doi:10.1111/bju.14787
26. Loeb S, Taylor J, Borin JF, et al. Fake news: spread of misinformation about urological conditions on social media. Eur Urol Focus. 2019. doi:10.1016/j.euf.2019.11.011
27. Thornton CP. Best practice in teaching male adolescents and young men to perform testicular self-examinations: a review. J Pediatr Health Care. 2016;30(6):518-527.
28. Yeo S, Eigl B, Ingledew P-A. A fountain of knowledge? The quality of online resources for testicular cancer patients. Can Urol Assoc J. March 2020. doi:10.5489/cuaj.6154
29. Wake N, Rosenkrantz AB, Huang R, et al. Patient-specific 3D printed and augmented reality kidney and prostate cancer models: impact on patient education. 3D Print Med. 2019;5(1):4.
30. Bernhard J-C, Isotani S, Matsugasumi T, et al. Personalized 3D printed model of kidney and tumor anatomy: a useful tool for patient education. World J Urol. 2016;34(3):337-345.
31. U.S. Preventive Services Task Force. Screening for testicular cancer: U.S. Preventive Services Task Force reaffirmation recommendation statement. Ann Intern Med. 2011;154(7):483-486.
32. Akar SZ, Bebiş H. Evaluation of the effectiveness of testicular cancer and testicular self-examination training for patient care personnel: intervention study. Health Educ Res. 2014;29(6):966-976.
33. The male genital examination: a position paper of the Society for Adolescent Health and Medicine. J Adolesc Health. 2012;50(4):424-425. doi:10.1016/j.jadohealth.2012.01.002

Figures and Tables

Fig. 1. Iterative design process used for the design and completion of the 3D-printed testicular cancer models. *Feedback sessions were held with urologists, nurse practitioners and medical students who were not part of the research team.
Fig. 2. Photographs of the series of 3D-printed testicular cancer models. (A) All five testicular cancer models; each scrotum contains either two healthy testicles or a healthy testicle and a testicle containing an endophytic lesion of varying size. (B) One of the testicular cancer models with a stand for easy palpation. (C) The testes from the testicular cancer model; the malignancies are endophytic and cannot be distinguished visually.

Table 1. Results of the evaluation of the testicular cancer models by urologists and urology nurse practitioners. Each statement was rated on a five-point Likert scale; "agree" is the percentage of responses of 4 (agree) or 5 (strongly agree), and "disagree" the percentage of responses of 1 (strongly disagree) or 2 (disagree). SD: standard deviation.
- This model is anatomically accurate: n=8, 4.5±0.5; agree 100%, neutral 0%, disagree 0%.
- On palpation, the testicle with no mass feels like an accurate representation of a healthy testicle: n=8, 4.6±0.5; agree 100%, neutral 0%, disagree 0%.
- On palpation, the simulated testicle pathology feels like an accurate representation of pathology requiring further investigation: n=8, 4.4±0.5; agree 100%, neutral 0%, disagree 0%.
- This model allows for an accurate simulation of a testicular exam: n=7, 4.3±1.0; agree 71.4%, neutral 28.5%, disagree 0%.
- This model would be a useful teaching tool for patients who are learning testicular self-examination: n=8, 4.8±0.7; agree 87.5%, neutral 12.5%, disagree 0%.
- This model would be a useful teaching tool for medical learners who are learning testicular examination: n=8, 4.8±0.5; agree 100%, neutral 0%, disagree 0%.
- This model is an improvement over existing models for testicular cancer: n=7, 4.6±0.5; agree 100%, neutral 0%, disagree 0%.

Table 2. Results of the evaluation of the testicular cancer models by first- and second-year medical students (same rating scheme as Table 1).
- At the beginning of the session, I possessed the skills to perform a testicular examination: n=32, 1.8±0.9; agree 6.3%, neutral 12.5%, disagree 81.3%.
- At the end of the session, I possessed the skills to perform a testicular examination: n=32, 4.2±0.4; agree 100%, neutral 0%, disagree 0%.
- At the beginning of the session, I felt confident in performing a testicular examination: n=32, 1.6±0.9; agree 6.3%, neutral 6.3%, disagree 87.5%.
- At the end of the session, I felt confident in performing a testicular examination: n=32, 3.9±0.4; agree 84.4%, neutral 15.6%, disagree 0%.
- This model is anatomically accurate: n=31, 4.4±0.6; agree 96.8%, neutral 3.2%, disagree 0%.
- This model allows for an accurate simulation of a testicular exam: n=30, 4.3±0.5; agree 96.7%, neutral 3.3%, disagree 0%.
- This model would be a useful teaching tool for patients who are learning testicular self-examination: n=32, 4.7±0.5; agree 100%, neutral 0%, disagree 0%.
- This model would be a useful teaching tool for medical learners who are learning testicular examination: n=32, 4.8±0.4; agree 100%, neutral 0%, disagree 0%.
- This model would be a useful addition to the existing urology curriculum: n=30, 4.7±0.4; agree 100%, neutral 0%, disagree 0%.
3 In contrast, patients who miss the early window and present with a stage IV metastatic diagnosis have a fiveyear survival of 74%. 3 Early detection and management of cancerous testicular lesions is thus critical in optimizing patient outcomes. As testicular cancer often presents as a painless mass in the scrotum and/or testicular swelling, 4-6 testicular examination is a non-invasive and simple test to identify such pathologic findings at an early stage. However, past research has shown that only 5% of medical students beginning their clinical clerkship training, 7 as well as only 36% of pediatrics residents, 8 identify as comfortable with their testicular examination skills. Accumulating evidence also suggests potential benefits of testicular self-examination (TSE) as a screening test given its privacy and convenience; 9,10 however, previous surveys amongst male college students report that only 41% are taught TSE and only 8% perform the examination with regularity. Such results highlight an educational gap amongst both medical learners and patients, as well as a need for novel tools that will facilitate such education. One technology that enables the development of high-fidelity educational tools is 3D printing. This technology has the distinct advantage of demonstrating spatial relations between anatomically-accurate structures, making it well-suited for use in both medical training and patient education. [11][12][13] The use of 3D-printed models as an adjunct to existing techniques has been found to be preferred by patients, as well as beneficial in improving their understanding of relevant anatomy. [14][15][16][17][18] Similar advantages have been demonstrated in medical education, where 3D-printed models have been effective in improving anatomy and clinical skills curricula. [19][20][21][22] Despite the reported accuracy and educational benefits of 3D-printed anatomical models, no previous study (to the authors' knowledge) has investigated 3D printing as a means of improving testicular examination education. To this end, the presented study describes the development of a set of 3D-printed models designed for the purposes of teaching testicular examination and improving understanding of testicular malignancies amongst patients and medical learners. Methods A multidisciplinary team comprising urologists, engineers, and medical students collaborated to conceptualize a set of 3D-printed testicular cancer models. Five models were designed, with each simulated scrotum containing either a) two healthy testicles, or b) one healthy testicle and one testicle with an endophytic lesion of varying size. The endophytic lesions would be placed at varying locations within the pathologic testicle and would range in size from 5% to 60% of the total size of the testicle. Moreover, larger testicle models were used for bigger masses, simulating the testicular swelling that is often concurrent with testicular lesions. An initial prototype was designed and developed to simulate a normal human scrotum, testes, and epididymides. An iterative design process between the design team and urologists allowed for extensive refinement of the models in terms of anatomical accuracy. The urologists were presented with three printing materials -Dragon Skin 10 NV, Dragon Skin Fx-Pro, and Smooth-On Ecoflex 00-30 -and asked to select the printing materials that they felt best simulated the texture and consistency of a scrotum, testicle, and epididymis. 
The design team used the urologist feedback to further optimize the developed model, and this process was repeated until the urologists agreed that the model accurately simulated a human scrotum both on inspection and palpation ( Figure 1). Next, a similar process was used to develop a series of models containing a cancerous testicle. Testicular masses were simulated by first casting an irregular shape in a denser material than that used for the testicle. This irregular mass was then suspended in the mold used for the testicle during the casting process. This enabled an irregular mass to be placed within the testicle and to be appreciated on palpation. The urologists were presented with individual samples casted in thermoplastic elastomer and polylactic acid polymer and were asked to select which material best simulated a testicular malignancy. The size, texture, and location of the masses were also guided by frequent input from the urologists. An iterative process was repeated until the urologists were satisfied that the models accurately simulated palpation of a cancerous testicle ( Figure 1). All aspects of the model were designed in OpenJSCAD and finalized using MeshMixer. All models were casted using a Prusa MKS3S printer. Once the set of testicular cancer models had been developed, two separate sessions were held to ascertain feedback from both clinicians and medical learners. In the first session, a group of urologists and urology nurse practitioners was asked to visually inspect and palpate the developed models. In a second session, first-and second-year medical students were provided with a brief tutorial on testicular examination by a staff urologist. Of the medical learners, the first-year medical students had not yet received formal urological education as part of their undergraduate medical curriculum, while the second-year medical students had received one hour of didactic teaching on testicular cancer and two hours on urologic examination skills. These learners were then asked to practice their examination skills using the developed models. After using each model, participants in both sessions were asked to complete a survey in which they were asked to rate their agreement with several statements using a five-point Likert scale. Items on the clinician survey primarily focused on the anatomical accuracy of the developed models, the usefulness of the models in simulating a testicular examination, and the overall applicability of the models as teaching tools. The survey for medical learners featured additional items relating to the pre-and post-session levels of both skill and confidence in performing a testicular examination. Clinicians and medical learners were also asked to select, from a list of five potential applications (e.g. medical student training, resident training, nurse practitioner training, family physician training, patient education), all of the purposes for which CUAJ -Original Research Power et al 3D-printed model to teach testicular exams 4 © 2020 Canadian Urological Association they felt the models would be beneficial. Lastly, both surveys included a section for participants to provide qualitative feedback relating to areas for improvement and the potential clinical applicability of the developed models. Results Through the multidisciplinary design process, it was determined that Smooth-On Ecoflex 00-30 silicone was the most anatomical representative material for the model's scrotum and testicle texture. 
Similar evaluation of materials for the testicular pathologies concluded that polylactic acid polymer was the most anatomically representative material ( Figure 2). A total of six urologists and two urology nurse practitioners participated in the study. Responses from the surveyed urologists and urology nurse practitioners are summarized in Table 1. All surveyed participants agreed that the developed models would be useful teaching tools for both medical learners and patients. They also agreed that these models would be beneficial for resident, nurse practitioner, and family physician training. Respondents felt that the models enabled accurate simulation of a testicular examination. All participants agreed that the models effectively simulated palpation of healthy and pathologic testes. Qualitative feedback concluded that the majority would use these models as a teaching aid for both patients and medical learners. In addition, all agreed that they would incorporate these models in some way into their practice. In terms of areas for improvement, respondents suggested that the epididymis could be more prominent to more accurately simulate palpation. These recommendations were used to further optimize the anatomical accuracy of the testicular cancer model (Figure 1, Figure 2. C). Survey data were collected from 32 medical learners. Of the 32 learners, 26 were firstyear medical students and 6 were second-year medical students. The results of the medical learner survey are summarized in Table 2. The proportion of medical learners identifying as confident in performing a testicular examination increased from a pre-session value of 6.3% to 84.4% following the session (Table 2). A similar effect was demonstrated in the proportion of learners claiming that they possessed the skills to perform a testicular examination, which increased from 6.3 % to 100% ( Table 2). The majority of medical learners felt that the use of the testicular models would be helpful in the current medical school curriculum. In addition, most medical learners believed that these models would be beneficial for medical student training (96.9%) and patient education (90.6%). The majority of medical learners stated that these models would be useful for resident training (75.0%), nurse practitioner training (84.4%), and family physician training (75.0%). Medical learner qualitative feedback concluded that the testicular models were helpful in identifying a testicular mass on palpation, practising palpation technique, and differentiating between a pathologic and healthy testicle. Discussion The presented 3D-printed testicular models have the potential to improve testicular cancer education for medical learners and patients. From the survey of medical learners, the models CUAJ -Original Research Power et al 3D-printed model to teach testicular exams 5 © 2020 Canadian Urological Association were shown to increase both skill and confidence relating to the performance of a testicular examination. Both medical learners and clinicians agreed that these models would be a beneficial addition to the existing medical learner urology curriculum. Learning to perform a genitourinary examination on actual patients can be uncomfortable for both the patient and the learner. Alternative teaching modalities can potentially relieve anxiety associated with the standard apprenticeship method of teaching urologic examination skills, as well as create a safe environment for learners to refine their skills. 7,23,24 A study by Kaplan et al. 
stated that, after medical students were given an intensive examination skills course using a standardized patient, 90.3% of students reported being significantly more comfortable with performing a testicular examination. These medical students reported that learning a testicular examination was one of the most useful urologic skills to learn in a standardized environment. 7 Medical learners have also previously reported a preference for practising examination skills on male anatomical models rather than standardized patients. 23 In fact, one study found crude handmade models simulating testicular swelling pathologies to be beneficial in teaching urologic examination skills to medical students. 24 As using a standardized patient can be costly and logistically burdensome, simulated education tools offer a convenient alternative that may improve testicular examination education. 23

The survey data also suggested that the developed models would be beneficial in educating patients about TSE and testicular malignancies in general. It has been previously reported that there is a significant lack of education in the general public surrounding testicular cancer and TSE. [25][26][27] Misinformation relating to urologic cancers can be easily spread through social media. 25,26 A recent publication by Yeo et al. suggested that, with the increase in patients acquiring information about testicular cancer from sources that are not validated or credible, direct patient education has become even more important. 28 Directing patients towards validated testicular cancer education programs is beneficial for patient-physician communication, as well as overall patient understanding. The testicular cancer model described in this study can serve as an adjunct for patient education and facilitate discussion regarding the benefits of screening and the risk of testicular malignancy.

Currently, the number of testicular cancer models available on the market is limited. The available models are expensive and fail to accurately show disease progression. From a review of the current products available on the market, a progressive set of five testicular cancer models would cost approximately $875 Canadian Dollars (CAD), while a male pelvic trainer would cost approximately $3700 CAD. The use of 3D printing offers a unique solution to the high cost of existing models. The material cost of the five progressive testicular cancer models presented in this study was approximately $13 CAD ($10 of Smooth-On Ecoflex 00-30 silicone and $3 CAD of polylactic acid polymer). That is a direct cost savings of $862 CAD. In addition, many universities and public libraries now offer 3D printing services that could be used to print the developed models at a low cost. Accordingly, 3D-printed technology allows for anatomically accurate models to be made at a fraction of the cost of existing models, thus mitigating the financial barriers currently associated with quality testicular cancer education.
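To make the savings figure explicit (a simple check of the numbers reported above, not an additional analysis from the study):

$$ \$875 - (\$10 + \$3) = \$862\ \text{CAD}, \qquad \frac{\$13}{\$875} \approx 1.5\% $$

In other words, the 3D-printed set costs roughly 1.5% of the comparable commercial product, before accounting for printer amortization or labor, which the material-cost comparison above does not include.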
Using 3D-printed models as an adjunct to existing techniques is rated as a highly valuable learning tool for patients, as well as beneficial in improving their understanding of relevant anatomy and surgical complications. [14][15][16][17][18] In addition, truly patient-informed decision-making is predicated on a basic understanding of pathology and anatomy, a process that can be facilitated with various patient education aids. This technology has been shown to be a successful means of educating patients in urology. 17,29,30 In a recent systematic review performed by Lupulescu and Sun, 27 studies analyzing the use of 3D-printed technologies for preoperative surgical planning and patient education in renal surgery were examined. This review found that patient-specific 3D-printed models were useful for educating patients and their families on renal surgery, with high reported levels of satisfaction when using 3D-printed models. 11

Testicular cancer commonly presents with symptoms that are recognizable by patients; thus, it is important that the general population receives proper education on how to complete a TSE. Despite the potential benefits and relative ease of TSE, it should be noted that its use as a broad screening tool is debated in the current literature. Currently, neither the Canadian Urological Association nor the American Urological Association has published guidelines related to testicular cancer screening and TSE. The United States Preventive Services Task Force recommends against testicular cancer screening in asymptomatic males due to a lack of evidence demonstrating a benefit and a potential association with increased anxiety related to a false-positive result. 31,32 They do, however, recommend TSE in the context of high-risk individuals, such as those with cryptorchidism or a positive family history. 31,32 The Society for Adolescent Health and Medicine recommends TSE, as they state that it identifies several risk factors for testicular cancer. 33 It has also been found that TSE is associated with improved education and increased comfort amongst young adult males receiving a genital examination. 33 Despite the current discrepancies between various organizations, it is generally agreed that a significant proportion of men in the general population are unaware of the causes and symptoms of testicular cancer. Moreover, the unclear benefit of TSE is likely attributable to a current lack of large population studies assessing the effect of this screening method.

The results of this study must be viewed in light of its limitations. The information collected in this study was ascertained from a single institution in a relatively small cohort. The large increase seen in medical learner skill and confidence in performing testicular examinations could be attributed to some external factors. The educational session was the first time that many of the students were exposed to testicular cancer education; thus, pre- and post-session self-reported outcomes were likely not the best metrics for quantifying educational benefit. As a caveat to this limitation, the second-year medical students participating in the session qualitatively noted that the 3D-printed models would be a useful adjunct to the existing urologic clinical skills curriculum.

Additional research is warranted for clinical validation of the developed 3D-printed models. Future work could include obtaining feedback from a larger cohort of respondents, including members of the general public and high-risk patients. In order to demonstrate educational utility, it would also be beneficial to incorporate these models into the existing urology curriculum and to assess the associated educational effect using an interventional study design.
This could be accomplished by comparing performance on standardized clinical examinations between students with and without exposure to the models as part of their urology curriculum. Similar models replicating other testicular and scrotal pathologies, such as epididymal cysts, testicular torsion, and hydrocele, could also be designed and developed using an iterative design process similar to the methodology outlined in this study.

Conclusions

This study describes the development and preliminary validation of 3D-printed urological models that may fill existing gaps in patient and medical learner education. 3D-printed models can simulate anatomical structures in a low-cost and effective manner. In the field of urology, this technology presents a unique opportunity to develop and produce educational models that can maintain a high level of fidelity at a low cost. In this study, a set of testicular cancer education models was developed. These models were well-accepted by surveyed urology practitioners and medical learners. The models were also shown to improve both medical learner skill and confidence in performing a testicular examination. The developed 3D-printed models may enable urologists and family physicians to better educate their patients, as well as assist medical learners in developing testicular examination skills.

Fig. 1. Iterative design process used for the design and completion of the 3D-printed testicular cancer models. * Feedback sessions were held with urologists, nurse practitioners, and medical students that were not a part of the research team.

[Survey table footnotes] * Percentage of responses as either agree (4) or strongly agree (5). † Percentage of responses as either disagree (2) or strongly disagree (1). SD: standard deviation.
Recent State and Challenges in Spectroelectrochemistry with Its Applications in Microfluidics

This review paper presents the recent developments in spectroelectrochemical (SEC) technologies. The coupling of spectroscopy and electrochemistry enables SEC to provide a detailed and comprehensive study of the electron transfer kinetics and the vibrational spectroscopic fingerprint of analytes during electrochemical reactions. Though SEC is a promising technique, its usage is still limited; considering its potential in the analytical fields, wider publicity for SEC is required. Unlike previously published review papers, which primarily focused on the relatively frequently used SEC techniques (ultraviolet-visible SEC and surface-enhanced Raman spectroscopy SEC), the two not-frequently used but promising techniques (nuclear magnetic resonance SEC and dark-field microscopy SEC) have also been studied in detail here. This review paper not only focuses on the applications of each SEC method but also details their primary working mechanisms. In short, this paper summarizes each SEC technique's working principles, current applications, challenges encountered, and future development directions. In addition, each SEC technique's applicative research directions are detailed and compared in this review work. Furthermore, integrating SEC techniques into microfluidics is becoming a trend in miniaturized analysis devices. Therefore, the usage of SEC techniques in microfluidics is discussed.

Introduction

Because the coupling of spectroscopy and electrochemistry (hereafter spectroelectrochemistry (SEC)) can provide a detailed and comprehensive study of the electron transfer kinetics and of analytes' structural information during the electrochemical process, SEC is attracting intensive interest for various research in analytical fields, ranging from biology [1] to chemistry [2], material engineering [3,4], and others [5]. The schematic diagrams of the SEC technique are shown in Figure 1a. Electrochemical techniques, such as cyclic voltammetry (CV), differential pulse voltammetry (DPV), or electrochemical impedance spectroscopy (EIS), have been employed in SEC techniques [6,7]. Similarly, ultraviolet-visible (UV-Vis), Raman/surface-enhanced Raman spectroscopy (SERS), and nuclear magnetic resonance (NMR) are frequently used spectroscopy techniques. Therefore, SEC techniques have incredible versatility because multiple electrochemical methods are available and different spectral regions can be analyzed depending on the system under study and the desired information to be obtained [8][9][10]. SEC techniques have been used to determine small structural changes and tiny luminescent responses [6,11]. Some examples include comprehending the electron transfer kinetics between the electrode and different electrolyte matrices [12], mass transport [13], and redox events for analytes and nanoparticles (NPs) [13,14].
The SEC "family" is continually expanding, including techniques such as dark-field microscopy SEC (DFM SEC) and nuclear magnetic resonance SEC (NMR SEC) [2,6,11,[14][15][16]. The past few decades have witnessed different SEC-combined techniques appearing in various analytical fields. Figure 1b shows articles on the relevant techniques in the past four years. However, even for the two relatively mature combinations of UV-Vis SEC and Raman SEC, the body of published work is still limited, not to mention the use of NMR SEC and DFM SEC. The development of SEC techniques still faces a severe challenge due to the lack of widespread publicity. For this purpose, the developments of the past few years in SEC techniques, namely, UV-Vis SEC, Raman SEC, NMR SEC, and DFM SEC, are discussed in this review work. Among the SEC techniques, UV-Vis SEC and Raman SEC are the two most widely used SEC technologies; therefore, it is necessary to know their latest research trends [17][18][19][20][21][22][23]. As limited papers are available on NMR SEC and DFM SEC, which has further limited these two promising SEC technologies' adoption, a discussion of these two techniques is essential. Herein, the present review paper is structured by discussing each technique's basic working principle and then the current state of the art in the field. Recently, the application of SEC technology in microfluidics has received increasing attention. Therefore, the last section of this article presents the development of combined SEC and microfluidics technology. For the SEC techniques, their advantages/disadvantages for analysis applications and their future development directions/perspectives are discussed in the summary and outlook sections. Through this review work, we hope more people, whether established researchers or beginners, will be able to see, understand, and use these SEC technologies in their respective research fields.

Ultraviolet-Visible SEC (UV-Vis SEC)

As stated above, the most commonly used SEC setups are UV-Vis SEC and Raman SEC [24]. UV-Vis SEC is a powerful hybrid technology that allows the researcher to obtain electrochemical and spectroscopic responses simultaneously. UV-Vis SEC, the oldest SEC technique, was introduced in 1964 by Kuwana [25]. In this original work, a tin oxide-coated optically transparent electrode (OTE) platform was used as a WE to understand the mechanism of the oxidation process of o-toluidine (C14H16N2) and the absorption features of the electrooxidation products. Subsequently, the UV-Vis SEC approach was used for phenazine detection and exploration of its redox characteristics [26]. Pavel et al. used this technique for the rapid determination of the optical and redox properties of the [Zn2(NDC)2(DPNI)]0/−/2− metal-organic framework [27].

Based on the arrangement of the light source, UV-Vis SEC can be roughly divided into two categories, that is, the normal transmission arrangement (Figure 2a) and the parallel transmission arrangement (Figure 2b). When the light beam travels perpendicular to the WE surface in the normal configuration, it collects information about the solution and the WE. However, only the solution is sampled when the light beam is parallel to the WE (parallel transmission arrangement) [28]. For the normal transmission arrangement, the light source needs to penetrate through the analyte solution and the WE; therefore, in this case, OTEs are fundamental for the success of UV-Vis SEC and a topic of intense research. This drastically reduces the number of WEs that can be used. On the other hand, a perfect but difficult alignment of the light beams is required in the parallel configuration. In practice, this means that many different parts must be assembled to carry out a single experiment [19].
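In either arrangement, quantification from the transmitted light ultimately rests on the Beer-Lambert law; the following is a standard textbook relation given here for orientation, not an equation taken from the cited works:

$$ A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon\, l\, c $$

where A is the absorbance, I0 and I are the incident and transmitted intensities, ε is the molar absorptivity, c is the analyte concentration, and l is the optical path length. Note that l means different things in the two geometries: in the normal arrangement it is set by the thin solution (and diffusion) layer in front of the OTE, whereas in the parallel arrangement it is the beam path along the electrode surface.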
OTEs in UV-Vis SEC

OTEs are used for a broad range of applications, stemming from the fundamental investigation of electron transfer mechanisms to mature everyday applications, especially in the field of photovoltaics and thin film transistors (TFTs) [29][30][31]. At present, the commonly used OTEs are metal oxide films (e.g., indium tin oxide (ITO), fluorine-doped tin oxide (FTO)); thin metal films (e.g., gold (Au), platinum (Pt)); and carbon-based OTEs (e.g., graphene, carbon nanotubes, glassy carbon), deposited on borosilicate or quartz glass substrates.
Table 1 summarizes the employment of different OTEs in recent applications of UV-Vis SEC.

• ITO film has been one of the most used metal oxide OTEs. The thin film layer is typically sputter-coated on a glass substrate [49]. However, because of its inert potential window with a limited cathodic extent (i.e., −0.45 to +1.92 V vs. the reversible hydrogen electrode (RHE) in 0.1 M NaOH), this material can only be used in a relatively small range of potentials [50]. In addition, further widespread use may be limited by the cost, brittleness, and scarcity of indium [51]. FTO is another representative metal oxide OTE. Like ITO, FTO also suffers from an inert potential window with limited cathodic extent (i.e., −0.51 to +1.73 V vs. RHE in 0.1 M NaOH) [50].

• Thin metal films have been widely reported and validated. In particular, sputtered Au film is an appropriate candidate for the thin film electrode material because of its good conductivity, sufficient transparency, and very low chemical reactivity. Additionally, the electrochemical behavior of Au has been studied extensively [52,53], and thus it may be easier to predict and understand the behavior of an Au electrode. However, Au may not be an ideal electrode material for electrochemical studies that require high potentials due to the corresponding Au oxidation [50].

• Recently, the most reported studies have concentrated on using carbon-based OTEs, as stated in Table 1 [8,[32][33][34]. Carbon-based OTEs offer several advantages over traditional metal and metal oxide OTEs, including easy accessibility, excellent chemical inertness, high electrical conductivity, a wide electrochemical potential window, versatile preparation methods, and the simplicity of surface modifications [35,36]. Unlike ITO and FTO, whose transparency decreases rapidly for wavelengths shorter than ~350 nm, carbon-based OTEs can exhibit sufficient optical transparency over a broader frequency range [35,54]. Although carbon-based OTEs also present problems, such as (i) weak adhesion between the substrate and the carbon nanomaterials [37][38][39]; (ii) problems related to mass production [40]; and (iii) surface-preparation-dependent electrochemical performance, since the nature of their surfaces can affect their electrochemical behavior [41][42][43][44], their use is becoming more and more popular.

Applications of UV-Vis SEC

It is well known that UV-Vis SEC has been applied in many fields, for example, in electron transfer processes [55], solar cells [31,56], memory devices [57], and the determination of compounds of biological interest [10,[58][59][60]. A detailed summary of the chosen examples' electrode configurations and light arrangements is given in Table 2. As presented in Figure 3a, an in situ UV-Vis SEC technique interrogating a three-electrode configuration was demonstrated by Jesus et al. for the direct determination of ascorbic acid (AA) in a grapefruit [10].
In this case, the three-electrode cell (a working electrode (WE) and a counter electrode (CE) of SWCNTs, and a silver/silver chloride (Ag/AgCl) reference electrode (RE)) was directly placed inside the grapefruit without any pretreatment. Using the electrochemical method (oxidation of AA at +0.90 V), the concentration of AA was found to be [1.99 ± 0.14] × 10⁻³ M, while using the spectroscopic method (UV-Vis), they found the concentration to be [2.06 ± 0.11] × 10⁻³ M. One interesting point of this work that needs elaboration is the preparation of the SWCNT electrodes. First, the SWCNT dispersion was filtered, and subsequently, the SWCNT film was press-transferred onto a polyethylene terephthalate (PET) sheet using a stencil with a custom design. As a result, the excellent reproducibility of the SWCNT electrodes was demonstrated. Herein, the light source was aligned parallel to the WE surface, and the first 100 µm of the solution adjacent to the SWCNT WE surface was recorded for further optical analysis.
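As a quick consistency check of the two readouts (our arithmetic based on the values quoted above, not a calculation reported in the original work):

$$ \left|2.06 - 1.99\right| \times 10^{-3}\ \mathrm{M} = 0.07 \times 10^{-3}\ \mathrm{M} $$

which is well within the reported uncertainties (±0.14 and ±0.11 × 10⁻³ M), so the electrochemical and spectroscopic channels of the same SEC experiment agree on the AA concentration.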
Electrochemical preconcentration is one of the most frequently discussed preconcentration techniques at a controlled potential [61][62][63][64]. Hybrid SEC techniques also often involve this method to assist the goal of quantifying ultra-trace aqueous target analytes [47,65]. One example is a recently published paper by Arash et al., in which the monitoring of the potassium-channel blocking agent ampyra (AMP) was studied. As shown in Figure 3(b1), a glass cuvette equipped with an FTO transparent WE, a Pt plate (10 × 20 mm²) CE, and an Ag/AgCl RE was employed in this work. The light beam was transmitted through the WE and arrived at the diode array detector (~1 cm light path). The light intensity at 320 nm (related to AMP) was recorded and is shown in Figure 3(b2-b4). The AgNP-decorated FTO WE showed an exceptionally low detection limit of ~5.77 µmol/L AMP, which is satisfactory for quantifying the AMP in commercial tablets.

Surface-Enhanced Raman Spectroscopy SEC (SERS SEC)

As one of the two most used SEC setups, Raman SEC has been widely used in various research fields [24,75,76]. It is well known that Raman spectroscopy is a powerful technique widely used to study material structure because of its convenience, low price, and non-destructive characteristics [77]. However, Raman scattering is an inelastic scattering process with an extremely small cross-section. Hence, Raman has limited sensitivity and consequently constrained analysis efficiency and applicability [5].

Nanostructure-Defined SERS-Active Substrates

Surface-enhanced Raman spectroscopy (SERS) was introduced in 1974 by Fleischmann et al. to solve the above problems [78]. The SERS enhancement factor of the Raman signal can be as high as 10¹⁵ [79]. In SERS, the Raman substrate is a rough or nanostructured noble metal surface [67][68][69][70][71],80,81]. Under proper incident light, this metal surface gives rise to enhanced local electromagnetic fields via the localized surface plasmon resonance effect [5]. Up to now, considerable work has been done on the design of ideal SERS-active substrates [82][83][84]. SERS-active substrates in SEC setups include electrodes roughened by oxidation-reduction cycles [24], metal island films [85], colloidal NPs, and surface-confined nanostructures [86][87][88]. There are many types of different SERS-active substrates, either as structural motifs or as SERS materials, as shown in Figure 4.
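For reference, the analytical SERS enhancement factor is commonly defined as follows (a standard definition from the SERS literature, not taken from the works cited above):

$$ EF = \frac{I_{\mathrm{SERS}}/N_{\mathrm{SERS}}}{I_{\mathrm{RS}}/N_{\mathrm{RS}}} $$

where I_SERS and I_RS are the measured intensities of a given band with and without the enhancing substrate, and N_SERS and N_RS are the corresponding numbers of probed molecules. Values approaching the quoted 10¹⁵ refer to the most favorable "hot spot" conditions; typical substrate-averaged enhancement factors are many orders of magnitude lower.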
Applications of SERS SEC

The combination of SERS with electrochemistry has emerged as a powerful tool to monitor the structural changes of surface adsorbates [95,96] and reaction intermediates [97,98], and for the quantitative analysis of electrolysis products [24,60]. Daniel and co-workers showed that the SERS SEC technique achieved a theoretical ferricyanide detection limit (~1.5 × 10⁻⁸ M) in a 0.1 M KCl solution [24]. This was well below the limits of traditional electrochemical measurements (1 × 10⁻⁴ M). In this work, one commercially available silver screen-printed electrode (SPE) was applied, and the rough WE surface decorated with AgNPs was obtained via an in situ electrochemical activation strategy. More details about the SERS SEC setup are summarized in Table 2. Daniel et al. mainly conducted two experiments using the SERS SEC setup [2,24]. (i) Recording the time-resolved SERS SEC of the ferri-/ferrocyanide electrochemical process. For this, they ran a cyclic voltammetry (CV) experiment between +0.5 and −0.4 V at 0.05 V/s and recorded a Raman spectrum every 1 s. This experiment showed the correlation between the Raman response and the electrochemical transformation of the redox couple, and was mainly done to demonstrate the performance of the SERS SEC instrument. (ii) The use of in situ electrochemically activated Ag SPEs for the detection of ferricyanide and [Ru(bpy)3]2+. In this experiment, the electrochemical process also activated the Ag SERS-active substrate. They detected ferricyanide in concentrations as low as 1.5 × 10⁻⁸ M and [Ru(bpy)3]2+ as low as 2.1 × 10⁻⁸ M, both in a 0.1 M KCl solution. This result demonstrated the potential of SERS SEC for the sensitive, precise, and rapid detection of different analytes.
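A small side calculation on the acquisition settings in experiment (i) (our arithmetic from the stated parameters, not figures from the paper): sweeping the 0.9 V window from +0.5 to −0.4 V at 0.05 V/s takes

$$ t = \frac{0.9\ \mathrm{V}}{0.05\ \mathrm{V/s}} = 18\ \mathrm{s} $$

per half-cycle, i.e., 36 s per full CV cycle, so recording a spectrum every 1 s yields roughly 36 spectra per cycle, one per 50 mV of applied potential.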
However, using this typical oxidation-reduction cycle method to fabricate a SERS-active substrate often leads to inconsistent surface roughness and, consequently, to low reproducibility [20]. For SERS-based analytical transducers, reusability is critical for decreasing the variation between measurements and the manufacturing time. Marlitt et al. proposed one possible way to prepare a standard/practical analytical tool with excellent reusability [17]. The structure of the SERS SEC setup is shown in Figure 5(a1-a3). In this setup, a forest of Au-capped silicon nanopillars was applied as the SERS-active substrate and the WE, and the detection of the toxic compound melamine was carried out. Instead of using harsh reagents [17], UV irradiation [99], or high-temperature treatment [100], the recycling of the active substrates was achieved by applying a small positive voltage (+0.8 V, 1 min). The electrostatic force could successfully remove the positively charged melamine to refresh the substrate (relative standard deviation < 11.4%). Finally, an LOD of 0.01 ppm in PBS and an LOD of 0.3 ppm in milk were obtained, which are low enough for the established maximum allowed level (1 ppm) in powdered infant formula.

Apart from substrates with defined nanostructure morphologies (such as NPs, nanopillar forests, and nanodot arrays, among others), suspended metallic NPs have been used as "mobile" SERS-active substrates, especially in microfluidic devices [101][102][103]. Recently, Ling et al. reported using plasmonic liquid marbles (PLMs) in SERS SEC, as shown in Figure 5(b1-b3) [20]. In this work, a three-dimensional (3D) PLM covered by a shell of Ag nanocubes (Ag@PLM; the Ag nanocubes' average edge length is around 133 ± 9 nm) was prepared as a lab-on-a-droplet, microliter-scale SEC cell. The Ag@PLM was exploited as a bifunctional SERS platform and concurrently as a WE for redox process modulation. Remarkably, the PLM's synergistic electrochemical and SERS capability elucidated critical insights into the electrochemical reaction mechanisms and molecular structural changes of hexaammineruthenium(III) chloride and the toxin methylene blue. Finally, this novel 3D SERS SEC cell exhibited two-fold and ten-fold better electrochemical and SERS activities, respectively, than conventional 2D counterparts. However, the application of such "mobile" SERS substrates can cause some problems in actual use, which we should also keep in mind: (i) contamination and clogging issues [104]; (ii) interference with other downstream bio/chemical processes; and (iii) poor reproducibility, due to batch-to-batch variance in NP synthesis as well as aging of the colloidal suspensions [105,106].

Nuclear Magnetic Resonance SEC (NMR SEC)

When studying electrochemical systems, it would be beneficial to obtain (either concentration or structural) information on the reaction reagents, intermediates, and products during the electrochemical reactions to determine the possible reaction pathways. Among the spectroscopy techniques, nuclear magnetic resonance (NMR) spectroscopy is one of the frequently used techniques to elucidate the molecular structures of target analytes. Meanwhile, it is well suited for coupling with in situ electrochemical techniques [107]. It typically operates within the radio frequencies of 60 to 100 MHz. These low-energy waves can interact with nuclei with magnetic spins, such as the isotopes ¹H, ¹⁵N, and ¹³C. For NMR SEC, the different spin states of the nuclei become separated within a powerful magnetic field. The surrounding atoms and functional groups in a molecule influence how strongly the outside magnetic field affects the target nucleus locally. Consequently, NMR SEC can obtain comprehensive structural information on the molecules. NMR SEC has been used to investigate electrocatalytic processes [108,109], reaction intermediates [110,111], and reagent and product concentrations [112][113][114][115]. NMR SEC has been used to model the redox reaction processes of different analytes (such as hydroquinone (QH2) and phenacetin), as summarized in Table 2 [116,117]. Although NMR SEC technology has been dramatically improved and has shown immense potential, compared with the above UV-Vis SEC and SERS SEC techniques, NMR SEC has been limited to a few specialized groups, since no commercial NMR SEC cells that can be easily assembled for routine measurements are available [78]. Richards et al. did pioneering work combining NMR with in situ electrolysis, or NMR SEC, in 1975 [118]. In Richards et al.'s seminal work, a flow cell and an NMR tube were integrated into a two-electrode NMR SEC cell. A mercury (Hg)-coated Pt wire was used as the WE, and an uncoated Pt wire was used as the CE, as depicted in Table 3. Electrolysis products of trans-1-phenyl-1-buten-3-one (C6H5CH=CHCOCH3) were released into the detection region from the exit capillary (flow rate ≥ 0.2 mL/min). The successful observation of the reduction of C6H5CH=CHCOCH3 to 1-phenyl-3-butanone (C6H5CH2CH2COCH3) in the alkaline environment validated the successful combination of NMR with electrochemistry.
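For context on the quoted 60-100 MHz operating range (a textbook relation, not one derived in the cited works), the NMR resonance frequency is set by the Larmor condition:

$$ \nu_0 = \frac{\gamma}{2\pi} B_0 $$

For ¹H, γ/2π ≈ 42.58 MHz/T, so 60-100 MHz corresponds to static fields of roughly 1.4-2.3 T. Any conductive electrode placed near the detection region must avoid distorting a field that has to remain homogeneous to well below the ppm level, which motivates the cell designs discussed next.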
Deterioration of the Magnetic Field in NMR SEC

However, in most of the proposed electrochemical NMR cells, the electrodes are placed inside the NMR coil, which deteriorates the magnetic field homogeneity and reduces the signal-to-noise ratio. The conducting metallic electrodes disrupt the homogeneity of the magnetic field, a critical requirement for NMR [119,120]. Thus, NMR SEC is more complicated than other SECs [116]. A few research groups have made great efforts to address the problem of the electrode structure and to reduce or eliminate the disruption of the homogeneous magnetic field caused by the electrodes [11,121,122]. A detailed summary of the development of NMR SEC is given in Table 3, which covers some representative work and shows electrochemical cell designs with modified electrode structures. The continual optimization of NMR SEC electrochemical cells and the usage of new electrode materials have led to many novel NMR SEC studies. The main approaches to overcome the challenges include:
(i) Placing the electrodes outside the detection region [117,123,124];
(ii) Using secondary coils [107] or radiofrequency chokes [122,123,125,126];
(iii) Using thin film metallic electrodes [121,127,128];
(iv) Using nonmetallic electrodes such as carbon microfibers and polymer electrodes [11,122,129].

NMR SEC for Ethanol Oxidation Reaction Applications (Regular Electrode Configuration)

Direct ethanol fuel cells have aroused tremendous research interest because of their high energy density, environmental friendliness, easy refueling, and low operating temperatures [130,131]. Therefore, to monitor molecular changes of the reaction products and unveil the reaction mechanism of the ethanol oxidation reaction (EOR), Wang et al. introduced the in situ real-time setup of electrochemical NMR (EC-NMR) shown in Figure 6(a1), in which a Pt wire serves as the CE and an Ag wire serves as the RE [77]. An ITO electrode decorated with a hybrid material of small (~5.4 nm) PtNPs supported on molybdenum disulfide combined with graphene nanosheets is used as the WE. Employing in situ NMR, molecular information on the products and reactants is studied simultaneously during the electrochemical process. This successfully fulfills the purpose of elucidating the reaction mechanism of the EOR, as shown in Figure 6(a2).

NMR SEC for QH2 Applications (Using a Polymer Electrode)

As mentioned above, to minimize the interference with the magnetic field homogeneity and obtain a high signal-to-noise ratio, the recent use of the conductive polymer polyaniline (PAn) to form an ITO/PAn composite WE in NMR SEC has attracted wide attention [11]. The high conductivity, good redox reversibility, and excellent environmental stability of PAn have made it a material of choice in electrocatalysis [132]. Figure 6(b1,b2) describes the EC-NMR cell that uses the ITO/PAn composite WE. The device was used to monitor the oxidation process of hydroquinone (QH2) for the first time (Figure 6(b3)). The high sensitivity of the NMR SEC technique allowed the authors to monitor the generation of products quantitatively and precisely under varied solvent composition ratios and pH values.

NMR SEC for Ascorbic Acid Applications (Using the Magnetohydrodynamic Effect)

As electrodes or flow cells, ultra-thin metallic films require complex fabrication protocols. In addition, nonmetallic electrodes usually have limited electrochemical applications due to the low achievable currents [78].
To avoid these limitations, one frequently used methodology is to place the metallic electrode above the NMR detection area, as shown in Figure 6(c1). Here, one interesting point is the introduction of the magnetohydrodynamic (MHD) effect, rather than simply placing the electrode above the NMR detection area. The main force creating this effect is the magnetic force, which results from the cross-product between the ionic current density and the external magnetic field [133,134]. The stirring generated by the MHD effect can homogenize the reagent and product concentrations in the detection region, allowing the NMR to sense the analytes in real time. As shown in Figure 6(c1), Silva et al. used Pt wires to prepare the WE and CE, and an Ag wire for the RE. The electrodes were fixed on a glass capillary tube, inserted into a standard 5 mm NMR tube, and placed 0.5 mm above the NMR detection region. The in situ observation of the conversion of AA to dehydroascorbic acid (DA) was successfully achieved (Figure 6(c2-c4)). Compared to the ex situ configuration, a two-fold higher conversion efficiency of the electrooxidation from AA to DA was observed.
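The body force behind this stirring is the magnetic part of the Lorentz force density (a standard magnetohydrodynamic relation consistent with the description above, not an equation reproduced from the cited paper):

$$ \mathbf{f} = \mathbf{j} \times \mathbf{B}_0 $$

where j is the ionic current density (A/m²), B0 the static NMR field (T), and f the resulting force per unit volume (N/m³). Because B0 in an NMR magnet is large, even the modest currents of a small electrochemical cell generate appreciable convection, which is what mixes fresh product into the detection region.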
Figure 6. (a2) Reaction mechanism of ethanol oxidation on Pt/MoS2/GNS. Reprinted from [77], with permission from Elsevier. (b1,b2) Electrochemical cell designed for in situ EC-NMR; (b3) the molecular structures of polyaniline (PAn) and hydroquinone (QH2). Reprinted from [11], with permission from Elsevier. (c1) Electrochemical cell and electrodes; the WE, CE, and RE are fixed on the glass capillary tube. (c2) The different hydrogen atoms observed in the NMR spectra are highlighted; the ¹H NMR spectra of the oxidation of ascorbic acid in situ (c3) and ex situ (c4) are also shown. Reprinted from [78], with permission from Elsevier.

Table 3 (excerpt). Reported advantages and drawbacks of the representative NMR SEC cell designs, paired as listed in the source table:
(1) No need to modify the NMR probe, but low resolution, line broadening, and the use of a toxic metal;
(2) Minimal influence on magnetic field homogeneity, an unmodified probe, and outstanding resolution and sensitivity, but slow diffusion from the inactive region;
(3) Easy to prepare and broadly applicable, suitable for a large potential window, but results take a long time to be ready (~6 h);
(4) Fast, with the capability of quantitatively monitoring the generation of products under varied solvent compositions and pH values, but the probe requires a relatively complex preparation process.

Localized Surface Plasmon Resonance (LSPR) in DFM SEC

Nowadays, metallic NPs such as AuNPs, AgNPs, and GeNPs have emerged as a class of materials with unique optical [31,135,136], catalytic [137,138], mechanical [139,140], and biological properties [31,[141][142][143]. AgNPs have been widely used in consumer products (such as phones, refrigerators, etc.) and in medicine due to their exceptional antibacterial and anti-inflammatory effects [144,145]. AuNPs have been used for bacteria detection [146], in solar cells [147], and for sensing ionic electrolytes [148]. A representative illustration of an NP's localized surface plasmon resonance (LSPR) is shown in Figure 7a. The confined electrons in the conduction band, the electron cloud, are displaced by an incoming electric field (light) [149]. The negatively charged electron cloud is then pulled back by the Coulomb forces of the remaining fixed, positively charged nuclei. Electron-rich metal NPs, such as AuNPs, AgNPs, and PtNPs, can exhibit an intrinsic LSPR, which is defined by the particle size, shape, composition, interparticle spacing, and the dielectric properties of the local environment that surrounds the particles (Figure 7b) [14]. Therefore, the oxidation/reduction of an NP in aqueous suspension will lead to a shift in the LSPR frequency. Based on this property, the NP's chemical state can be traced and analyzed by probing the changes in the spectral extinction spectra. However, such information cannot be accessed by the established non-spectrally resolved optical methods, which only monitor the signal intensity [150]. Furthermore, it is well demonstrated that the redox potential of NPs is size-dependent [151][152][153], and the synthesis protocols for NPs usually lead to an inherent heterogeneity in the NP size distribution [154,155]. Therefore, studies that can look at a single NP within a heterogeneous NP distribution will allow researchers to observe trends that are not accessible through existing ensemble electrochemical techniques.
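In the quasi-static limit (particle much smaller than the wavelength), this restoring-force picture leads to the textbook dipolar polarizability of a sphere of radius R (a standard result quoted here for context, not a derivation from the cited works):

$$ \alpha(\omega) = 4\pi R^{3}\, \frac{\varepsilon(\omega) - \varepsilon_m}{\varepsilon(\omega) + 2\varepsilon_m} $$

with resonance (the Fröhlich condition) when Re ε(ω) ≈ −2ε_m, where ε(ω) is the metal's dielectric function and ε_m that of the surrounding medium. Since both a redox change of the particle surface and a change in the local medium alter this balance, the LSPR peak position shifts, which is precisely the spectral handle that DFM SEC uses to follow single-NP electrochemistry.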
Applications of DFM SEC

The dark-field scattering technique is receiving more and more attention, since the ability to reveal changes at the single-entity/nanoscale level is not readily accessible by established in situ measurements such as fluorescence spectroscopy and SERS [14]. Dark-field microscopy (DFM), in conjunction with electrochemical techniques (hereafter, DFM SEC), allows direct observation of the chemical reactions occurring on a single NP. In Figure 7c, the light is scattered by the NPs on the slide. The NPs on the slide are thus brilliantly illuminated against the dark background, and the LSPR is tracked by scattering in the near-infrared region [60,[156][157][158][159]. The LSPR extinction maximum of the NP can be measured with DFM and recorded using an electron multiplying charge-coupled device (EMCCD) camera. Consequently, DFM SEC is applied to study the oxidation processes on a single NP. Nanostructured electron-rich metals such as Au, Ag, and Pt exhibit strong light-scattering and absorption characteristics when their surface electrons are optically excited under resonance conditions from surface plasmons [80,[159][160][161][162]. AuNPs have been widely used in clean energy transformations due to their excellent and stable catalytic efficiency [80]. In the work shown in Figure 8(a1,a2), Pan's group used hydrazine as a model to study the local catalytic activities and structure-functionality relationships at the single-AuNP level. The kinetics of the electrocatalytic oxidation of hydrazine at AuNPs were analyzed in real time using the light-scattering SEC method at planar and miniaturized ITO electrodes. Compared with the NP detection method based on spontaneous collision events, the ITO ultramicroelectrode technique combined with dark-field scattering (DFS) provided a better understanding of the catalytic reactions and their reproducibility. Figure 8(b1) shows the schematic of Kevin et al.'s DFM SEC device. Hyperspectral imaging (HSI) and a CCD camera were used to record the changes in spectral position and LSPR intensity during the CV (50 mV/s) electrochemical process.
Figure 8(b2) gives the CCD snapshots of individual AgNPs under different applied potentials during the CV. This allowed the researchers to observe and analyze the redox process of AgNPs in the presence of Cl−. Upon applying a potential of ~0.1 V, the AgNPs oxidized to silver chloride (AgCl). The AgCl continued to oxidize to Ag2O3 or AgClO2 with an increase in potential (~1 V). During the reverse scan (a decrease in the potential down to ~0 V), the oxides were reduced back to AgNPs. It is worthwhile to note that this work also provided a comprehensive microparticle characterization method. In addition to different NPs, different nanostructured metals are used in the DFM SEC technique. Their excellent LSPR characteristics and high anisotropy make Au triangular nanoplates (AuTNPs) stand out, as their sharp vertices provide electric-field-enhanced hotspots [160,161]. A more recent experimental work by Gu et al. shows the successful use of AuTNPs to monitor pyrophosphate (PPi) sensitively and selectively [160,163]. This critical biological anion plays significant roles in various fundamental physiological processes (such as cellular metabolism and RNA and DNA polymerization) [164][165][166]. The work was based on the inhibition effect of PPi against the etching of AuTNPs in a solution of Cu2+ and I− ions. The etching of AuTNPs by the Cu2+ and I− ions leads to a blue shift and an intensity decrease in the LSPR scattering spectra of the AuTNPs. However, adding PPi can inhibit the etching of the AuTNPs due to the strong affinity of PPi to Cu2+ ions. Based on these facts, Gu et al. successfully established a simple, sensitive, and selective single-particle analysis platform for quantitatively detecting PPi, even in real biological samples.

SEC Techniques' Applications in Microfluidics

It is well known that the often-cited advantages of microfluidics, including faster response times, lower reagent volumes, and the potential for integration, are significant considerations in research work [167]. After studying the recent developments of SEC techniques, one interesting fact is that the combination of SEC and microfluidics is becoming a trend in research papers [104,168]. However, due to the limited use of the DFM technique and the confined cell structure of NMR, the present applications mainly rely on SERS/Raman- or UV-Vis-based SEC combined with microfluidics.

Applications of SERS/Raman SEC in Microfluidics

Based on the limited available papers, a few representative studies done with the integrated techniques are shown in Figure 9. One relatively straightforward but attractive configuration was proposed by Singh et al. for the highly sensitive detection of okadaic acid (OA) [168]. In this combined detection module, the microfluidic chip was employed to mix the OA and the OA aptamer well. The phosphorene-gold nanocomposite-modified screen-printed carbon electrode (SPCE), which possessed an affinity to the OA aptamer, was subsequently analyzed. The high performance of the OA detection, whether qualitative or quantitative, demonstrated that the proposed point-of-care device can be deployed to perform on-farm assays in fishing units.

Figure 9. (a) Fabrication steps for a three-electrode microfluidic device. Right: the process to produce a PDMS microfluidic device with a three-electrode configuration. Left: the as-assembled microfluidic device with embedded electrodes. Reprinted from [169], with permission from ACS Publications. (b) In situ SERS SEC analysis system.
Reprinted from [93], with permission from ACS Publications. (c) Illustration of the microfluidic setup for SERS measurements. Reprinted from [104], with permission from ACS Publications.

An "immobile" SERS-active substrate means nanostructures with a defined morphology (such as NPs, nanopillar forests, and nanodot arrays, among others) that are permanently attached to substrates. For example, in the recent work published by Triroj et al., a diamond-like carbon thin film was prepared as a biosensing platform/substrate in the microfluidic device, as shown in Figure 9a [169]. An in situ microfluidic analysis system was reported by Yuan et al. using nanostructured Au surfaces as the WE and simultaneously as the SERS-active substrate [93]. Information about the microfluidic device and the nanostructured Au substrate can be found in Figure 9b. However, a drawback of "immobile" SERS substrates is that they are typically intended for one-time use only [170].
With the aim of reusing "immobile" SERS substrates, Belder et al. successfully achieved regeneration of the SERS substrate by applying pulsed voltages, with demonstrated high reproducibility. This work incorporated a chemically roughened silver wire into the microfluidic chip for SERS measurements. The electrical regeneration process for the silver-wire SERS substrate, in which a potential is applied to clean the substrate, was achieved with the proposed structure (Figure 9c). Furthermore, the high reproducibility of the Raman spectra of malachite green confirmed that the same SERS substrate could be recycled multiple times.

Applications of UV-Vis SEC in Microfluidics

Similarly, the combined methodology of UV-Vis SEC and microfluidics has been widely used in biotechnology, catalysis, environmental protection, and other fields [7,171–174]. Compared with SERS/Raman SEC, however, UV-Vis SEC is employed more often in microfluidics, probably because the electrode substrate can be prepared more easily. As shown in Figure 10(a1–a3), in an interesting work reported by Colina et al., an easy method to transfer commercial SWCNTs onto different non-conducting, transparent supports for use as the WE was reported [174]. This work removes the often-employed hydraulic-press step from the WE preparation process, significantly expanding the possibility of transferring the SWCNT film to almost any support. Another interesting point of this work is the employment of bidimensional SEC technology. As shown in Figure 10(a3), two different light-beam arrangements, namely, normal and parallel transmission arrangements, are integrated into the same device to collect complementary information during the ferrocenemethanol electrode processes. Another interesting microfluidic device for UV-Vis SEC was proposed by Wang et al. [175]. A parallel transmission arrangement was adopted in this work, which avoided the need for OTEs. Spectral measurements were made using an in-house-constructed visible microspectrometer consisting of a deuterium/tungsten-halogen light source and a CCD spectrometer.

Figure 10. (a1) A photograph of the assembled cell ready to measure, (a2) a schematic view of the disassembled cell, and (a3) a detailed schematic view of the experimental setup. Reprinted from [174], with permission from ACS Publications. (b) Illustration of the microfluidic setup for UV-Vis SEC measurements. Reprinted from [175], with permission from Elsevier. (c) A schematic diagram of H2O2 detection on electrochemical POC devices with Au@PtNP/GO nanozymes.
Reprinted from [172], with permission from Elsevier.

In the work shown in Figure 10b, Seong et al. first reported an electrochemical point-of-care device with nanozymes for the quantification of hydrogen peroxide (H2O2), an intracellular signaling molecule [172]. The electrodes (WE, CE, RE) were prepared using commercially available ITO electrodes. Then, as depicted in Figure 10b, the artificial nanostructured enzymes were immobilized in the microfluidic channel, showing robust catalytic activity toward the 3,3′,5,5′-tetramethylbenzidine (TMB) substrate in the presence of H2O2. The oxidized TMB, with its blue color, was subsequently analyzed using the UV-Vis SEC technique. Finally, based on the proposed device structure, a broad detection range for H2O2 from 1 µM to 3 mM and a low LOD of 1.62 µM were successfully obtained.

Summary & Outlook

We have detailed the recent developments in composite SEC techniques, including UV-Vis SEC, Raman SEC, DFM SEC, and NMR SEC, as well as recent progress in combining SEC techniques with microfluidics. In addition, a detailed analysis of the working principles and the problems encountered in the selected applications has been given. As mentioned above, the combination of electrochemistry and spectroscopy (SEC techniques) has been applied in diverse research fields, ranging from electron transfer processes [55,176] and reaction mechanisms [167] to forensic sciences [177] and the determination of intermediates and final products in electrochemical reactions [112,113]. Furthermore, the continuous advancement of nanotechnologies and the use of new materials (NPs [20,77], conductive polymers [11], and composite materials [17,178]) have further promoted SEC techniques. However, each of the SEC techniques mentioned above still suffers from limitations on the way from the lab scale to widespread practical use, as summarized below:

UV-Vis SEC—For the UV-Vis SEC technique, as shown in Figure 2, OTEs are almost an inevitable topic in the normal transmission arrangement.
(i) The frequently used OTEs such as ITO and FTO offer only a limited inert potential window in the negative direction, and thin-film metallic OTEs are restricted in electrochemical studies requiring high potentials owing to the oxidation of the corresponding metal [50]. Therefore, considering the limitations of OTEs, more and more researchers are choosing parallel arrangement configurations, as summarized in Table 2. Compared with the normal transmission arrangement, the parallel configuration is also more favorable for conducting bidimensional SEC techniques [10]. (ii) However, in the parallel working mode, a precise but difficult alignment of the light beams is required, which complicates operation.

SERS SEC—For the SERS SEC technique, according to the latest statistics from the Web of Science, the Raman SEC technique is the most widely used one compared with the other SEC technologies (Figure 1b). More and more combinations of SERS and electrochemistry have been reported, owing to the huge enhancement factor of the Raman signal [86]. Different metallic/composite NPs and other confined nanostructure morphologies, such as nanopillar forests and nanodot arrays, have been prepared and studied [101–103]. However, (i) the need for nanostructured SERS-active substrates undoubtedly increases the difficulty, cost, and duration of experiments. (ii) An important issue is their reproducibility, considering the inherent batch-to-batch variance of NP synthesis and the difficulty of storage. Other difficulties include (iii) background noise in the Raman signal and (iv) the complicated instrumentation required for incorporation into a point-of-care or point-of-use system. Though handheld Raman spectrometers exist, they are limited by their resolution and bandwidth. Hence, developing better SERS-active substrates will be a critical area of research for the broad application of the SERS SEC technique.

NMR SEC—The NMR SEC technique seems to be the most versatile as a secondary technique for identifying the molecular signature of captured chemical moieties or small biomolecules. However, due to the low sensitivity of the NMR technique, most uses of NMR SEC focus on collecting information on reaction intermediates during the electrochemical process in order to determine possible reaction pathways [77,78,107]. The limitations to the widespread use of NMR SEC are: (i) the deterioration of magnetic-field homogeneity caused by metallic conducting electrodes; and (ii) the complex fabrication protocols usually required for thin metallic or nonmetallic electrodes such as carbon microfibers or polymer electrodes. Furthermore, nonmetallic electrodes usually have limited electrochemical applications due to the low achievable currents.

DFM SEC—Unlike the other SEC techniques, the DFM SEC technique focuses more on the relationship between structural characteristics and catalyst activity at the single-NP level. Understanding at the nanoscale is critical to designing and producing stable, high-performance catalysts; however, judging from the articles published in recent years (Figure 1b), a considerable gap remains before this technology finds widespread application in the research community. The reasons are as follows: (i) the technology places high demands on the electrode materials, since OTEs are required in DFM SEC; (ii) the coupling procedures for the optical and electrical paths are tedious;
(iii) further, DFM SEC setups require extensive optics and might not be easy to incorporate into a point-of-care or point-of-use system; and (iv) the reliability and device-to-device variation of DFM SEC are also concerns.

In summary, we have detailed the recent developments in composite SEC techniques, including UV-Vis SEC, SERS SEC, NMR SEC, and DFM SEC. A detailed analysis of their working principles, the challenges encountered in their applications, and recent development directions has been given. Combining SEC techniques with microfluidics is becoming an interesting trend across research fields ranging from biotechnology and catalysis to environmental protection [7,171,172]. Of note, as the summary in Table 2 shows, some articles employed an in situ bidimensional SEC methodology in their studies, which builds a more comprehensive understanding of the reaction mechanism, the electron transfer mechanism, the intermediates, the concentrations of products, and the relevant reaction pathways. Although there remain many problems of compatibility, mutual interference, optical path layout, and so on, this will be one of the promising development directions in the future. Although these SEC techniques still have many limitations to break through, with the continual development of new functional materials and nanotechnology, the problems existing in SEC techniques (such as the OTEs in UV-Vis/DFM SEC, the reusability of SERS-active substrates in SERS SEC, or the inhomogeneity of the magnetic field in NMR SEC) will eventually be solved. We hope this review article will enable more people, whether established researchers or novices, to become familiar with the use of SEC technology and will ultimately help promote SEC technology.
2023-03-19T15:19:14.969Z
2023-03-01T00:00:00.000
{ "year": 2023, "sha1": "a3de200774a405275aed2152c651acfddc3b7479", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/14/3/667/pdf?version=1679027347", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "435175ce37813e0e7d5b5f73a183249e6eda2afe", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [] }
150373934
pes2o/s2orc
v3-fos-license
The Fourier transform on harmonic manifolds of purely exponential volume growth Let $X$ be a complete, simply connected harmonic manifold of purely exponential volume growth. This class contains all non-flat harmonic manifolds of non-positive curvature and, in particular all known examples of harmonic manifolds except for the flat spaces. Denote by $h>0$ the mean curvature of horospheres in $X$, and set $\rho = h/2$. Fixing a basepoint $o \in X$, for $\xi \in \partial X$, denote by $B_{\xi}$ the Busemann function at $\xi$ such that $B_{\xi}(o) = 0$. then for $\lambda \in \C$ the function $e^{(i\lambda - \rho)B_{\xi}}$ is an eigenfunction of the Laplace-Beltrami operator with eigenvalue $-(\lambda^2 + \rho^2)$. For a function $f$ on $X$, we define the Fourier transform of $f$ by $$\tilde{f}(\lambda, \xi) := \int_X f(x) e^{(-i\lambda - \rho)B_{\xi}(x)} dvol(x)$$ for all $\lambda \in \C, \xi \in \partial X$ for which the integral converges. We prove a Fourier inversion formula $$f(x) = C_0 \int_{0}^{\infty} \int_{\partial X} \tilde{f}(\lambda, \xi) e^{(i\lambda - \rho)B_{\xi}(x)} d\lambda_o(\xi) |c(\lambda)|^{-2} d\lambda$$ for $f \in C^{\infty}_c(X)$, where $c$ is a certain function on $\mathbb{R} - \{0\}$, $\lambda_o$ is the visibility measure on $\partial X$ with respect to the basepoint $o \in X$ and $C_0>0$ is a constant. We also prove a Plancherel theorem, and a version of the Kunze-Stein phenomenon. non-constant harmonic function on a punctured neighbourhood of x which is radial around x, i.e. only depends on the geodesic distance from x. Copson and Ruse showed that this is equivalent to requiring that sufficiently small geodesic spheres centered at x have constant mean curvature, and moreover such manifolds are Einstein manifolds [CR40]. Hence they have constant curvature in dimensions 2 and 3. The Euclidean spaces and rank one symmetric spaces are examples of harmonic manifolds. The Lichnerowicz conjecture asserts that conversely any harmonic manifold is either flat or locally symmetric of rank one. The conjecture was proved for harmonic manifolds of dimension 4 by A. G. Walker [Wal48]. In 1990 Z. I. Szabo proved the conjecture for compact simply connected harmonic manifolds [Sza90]. In 1995 G. Besson, G. Courtois and S. Gallot proved the conjecture for manifolds of negative curvature admitting a compact quotient [BCG95], using rigidity results from hyperbolic dynamics including the work of Y. Benoist, P. Foulon and F. Labourie [BFL92] and that of P. Foulon and F. Labourie [FL92]. In 2005 Y. Nikolayevsky proved the conjecture for harmonic manifolds of dimension 5, showing that these must in fact have constant curvature [Nik05]. Another fundamental result states that harmonic manifolds of subexponential volume growth are flat [RS02]. In 1992 however E. Damek and F. Ricci had already provided in the non-compact case a family of counterexamples to the Lichnerowicz conjecture, which have come to be known as harmonic NA groups, or Damek-Ricci spaces [DR92]. These are solvable Lie groups X = N A with a suitable left-invariant Riemannian metric, given by the semi-direct product of a nilpotent Lie group N of Heisenberg type (see [Kap80]) with A = R + acting on N by anisotropic dilations. While the noncompact rank one symmetric spaces G/K may be identified with harmonic N A groups (apart from the real hyperbolic spaces), there are examples of harmonic N A groups which are not symmetric. In 2006, J. 
Heber proved that the only complete simply connected homogeneous harmonic manifolds are the Euclidean spaces, rank one symmetric spaces, and harmonic $NA$ groups [Heb06].

Though the harmonic $NA$ groups are not symmetric in general, there is still a well-developed theory of harmonic analysis on these spaces which parallels that of the symmetric spaces $G/K$. For a non-compact symmetric space $X = G/K$, an important role in the analysis on these spaces is played by the well-known Helgason Fourier transform [Hel94]. For harmonic $NA$ groups, F. Astengo, R. Camporesi and B. Di Blasio have defined a Fourier transform [ACB97], which reduces to the Helgason Fourier transform when the space is symmetric. In both cases a Fourier inversion formula and a Plancherel theorem hold.

The aim of the present article is to generalize these results to a large class of non-compact harmonic manifolds. Our analysis will be concerned with harmonic manifolds of purely exponential volume growth, which include all non-flat harmonic manifolds of non-positive sectional curvature or, more generally, all non-flat harmonic manifolds without focal points (see [Kni12, Theorem 6.5]). In particular this class includes all known examples of non-flat and non-compact harmonic manifolds. By purely exponential volume growth, we mean that there are constants $C > 1$, $h > 0$ such that for all $R > 1$ the volume of metric balls $B(x, R)$ of radius $R$ and center $x \in X$ satisfies

(1) $\frac{1}{C} e^{hR} \le \mathrm{vol}(B(x, R)) \le C e^{hR}$.

Let $X$ be a simply connected harmonic manifold of purely exponential volume growth with a fixed basepoint $o \in X$. It was shown in [Kni12] that for harmonic manifolds the condition of purely exponential volume growth is equivalent to Gromov hyperbolicity. Moreover, it follows from the work in [KP16] that the Gromov boundary agrees with the visibility boundary $\partial X$ introduced in [EO73]. The set $X \cup \partial X$ equipped with the cone topology defines a topological space homeomorphic to a closed unit ball in $\mathbb{R}^n$, where $n = \dim X$. For a given $\xi \in \partial X$ and any geodesic ray $\gamma : [0, \infty) \to X$ representing $\xi$ (see Section 2 for a precise definition), the Busemann function $B_\xi$ with $B_\xi(o) = 0$ is given by $B_\xi(y) = \lim_{t \to \infty} (d(y, \gamma(t)) - d(o, \gamma(t)))$. The level sets of $B_\xi$ are called horospheres in $X$. The manifold $X$, being harmonic, is also asymptotically harmonic, i.e. the mean curvature of all horospheres is equal to a constant $h \ge 0$. If $X$ has purely exponential volume growth then $h$ is positive and agrees with the constant $h$ appearing in (1). An easy computation shows that for $\rho = h/2$ and any $\lambda \in \mathbb{C}$ and $\xi \in \partial X$, the function $f = e^{(i\lambda - \rho)B_\xi}$ is an eigenfunction of the Laplace-Beltrami operator $\Delta$ on $X$ with eigenvalue $-(\lambda^2 + \rho^2)$. The Fourier transform of a function $f \in C^\infty_c(X)$ is then defined to be the function on $\mathbb{C} \times \partial X$ given by

$$\tilde{f}(\lambda, \xi) := \int_X f(x) e^{(-i\lambda - \rho)B_\xi(x)} \, d\mathrm{vol}(x).$$

When $X$ is a non-compact rank one symmetric space, this reduces to the Helgason Fourier transform. The normalized canonical measure of the unit tangent sphere $T^1_o X$ induced by the Riemannian metric is denoted by $\theta_o$. The unit tangent sphere $T^1_o X$ is identified with the boundary $\partial X$ via the homeomorphism $\mathrm{pr}_o : v \in T^1_o X \mapsto \xi = \gamma_v(\infty) \in \partial X$, where $\gamma_v$ is the unique geodesic ray with $\gamma'_v(0) = v$. Pushing forward the measure $\theta_o$ on $T^1_o X$ by the map $\mathrm{pr}_o$ gives a measure on $\partial X$ called the visibility measure, which we denote by $\lambda_o$.
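The "easy computation" referred to above can be spelled out as follows (our sketch, using two standard facts: Busemann functions have unit gradient, $\|\nabla B_\xi\| = 1$, and their Laplacian equals the mean curvature of horospheres, $\Delta B_\xi = h = 2\rho$):

$$\Delta e^{(i\lambda - \rho)B_\xi} = \left((i\lambda - \rho)^2 \|\nabla B_\xi\|^2 + (i\lambda - \rho)\,\Delta B_\xi\right) e^{(i\lambda - \rho)B_\xi} = \left((i\lambda - \rho)^2 + 2\rho(i\lambda - \rho)\right) e^{(i\lambda - \rho)B_\xi} = -(\lambda^2 + \rho^2)\, e^{(i\lambda - \rho)B_\xi},$$

since $(i\lambda - \rho)^2 + 2\rho(i\lambda - \rho) = -\lambda^2 - \rho^2$.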
We have the following Fourier inversion formula:

Theorem 1.1. Let $(X, g)$ be a simply connected harmonic manifold of purely exponential volume growth. Then there is a constant $C_0 > 0$ and a function $c$ on $\mathbb{R} - \{0\}$ such that for all $f \in C^\infty_c(X)$,

$$f(x) = C_0 \int_0^\infty \int_{\partial X} \tilde{f}(\lambda, \xi)\, e^{(i\lambda - \rho)B_\xi(x)} \, d\lambda_o(\xi)\, |c(\lambda)|^{-2} \, d\lambda$$

for all $x \in X$.

We also have a Plancherel theorem:

Theorem 1.2. Let $(X, g)$ be as in Theorem 1.1. Then for $f, g \in C^\infty_c(X)$,

$$\int_X f(x) \overline{g(x)} \, d\mathrm{vol}(x) = C_0 \int_0^\infty \int_{\partial X} \tilde{f}(\lambda, \xi) \overline{\tilde{g}(\lambda, \xi)} \, d\lambda_o(\xi)\, |c(\lambda)|^{-2} \, d\lambda,$$

and the Fourier transform extends to an isometry of $L^2(X, d\mathrm{vol})$ into $L^2([0, \infty) \times \partial X, C_0 |c(\lambda)|^{-2} d\lambda\, d\lambda_o(\xi))$.

The function $c$ in the previous two theorems is holomorphic on $\{\operatorname{Im} \lambda < 0\}$ and has the following integral representation:

Theorem 1.3. Let $(X, g)$ be a simply connected harmonic manifold of purely exponential volume growth and $c$ be the $c$-function of the radial hypergroup of $X$. Let $\operatorname{Im} \lambda < 0$. Then we have

$$c(\lambda) = \int_{\partial X} e^{-2(i\lambda - \rho)(\xi|\eta)_x} \, d\lambda_x(\eta)$$

for any $x \in X$, $\xi \in \partial X$, where $(\xi|\eta)_x$ is the Gromov product on $X$ given in Definition 2.2 below.

We define a notion of convolution with radial functions and prove the following version of the Kunze-Stein phenomenon:

Theorem 1.4. Let $(X, g)$ be a simply connected harmonic manifold of purely exponential volume growth. Let $x \in X$ and let $1 \le p < 2$. Let $g \in C^\infty_c(X)$ be radial around the point $x$. Then for any $f \in C^\infty_c(X)$ the inequality $\|f * g\|_2 \le C_p \|g\|_p \|f\|_2$ holds for some constant $C_p > 0$. It follows that for any $g \in L^p(X)$ radial around $x$, the map $f \in C^\infty_c(X) \mapsto f * g$ extends to a bounded linear operator on $L^2(X)$ with operator norm at most $C_p \|g\|_p$.

The article is organized as follows. In Section 2 we recall basic facts about harmonic manifolds which we require. In Section 3 we compute the action of the Laplacian $\Delta$ on spaces of functions constant on geodesic spheres and horospheres respectively. In Section 4 we carry out the harmonic analysis of radial functions, i.e. functions constant on geodesic spheres centered around a given point. Unlike the well-known Jacobi analysis [Koo84], which applies to radial functions on rank one symmetric spaces and harmonic $NA$ groups, our analysis here is based on hypergroups [BH95]. We define a spherical Fourier transform for radial functions, and obtain an inversion formula and Plancherel theorem for this transform. In Section 5 we prove the inversion formula and Plancherel formula for the Fourier transform. The main point of the proof is an identity expressing radial eigenfunctions in terms of an integral over the boundary $\partial X$. The integral formula for the function $c$ (Theorem 1.3) is proved in Section 6. In Section 7 we define an operation of convolution with radial functions, and show that the $L^1$ radial functions form a commutative Banach algebra under convolution. Finally, in Section 8 we prove a version of the Kunze-Stein phenomenon.

Acknowledgements. The first author would like to thank Swagato K. Ray and Rudra P. Sarkar for generously sharing their time and knowledge over the course of numerous educative and enjoyable discussions. The other two authors would like to thank the MFO for hospitality during their stay in the "Research in Pairs" program in 2019 and the SFB/TR191 "Symplectic structures in geometry, algebra and dynamics". This article generalizes an earlier version by the first author in the case of negatively curved harmonic manifolds.

Basics about harmonic manifolds

Throughout this article, we assume that all manifolds are complete. We start by presenting some fundamental facts about non-compact simply connected harmonic manifolds. References for this class of manifolds include [RWW61], [Sza90], [Wil93], [KP13] and [Kni16]. Such manifolds do not have conjugate points and, for every $x \in X$, the exponential map $\exp_x : T_x X \to X$ is a diffeomorphism. (See e.g. [Kni02] on basic geometric and dynamical properties of spaces without conjugate points.)
The absence of conjugate points in X allows to define Busemann functions associated to geodesic rays These functions are of central importance in our paper and are given by The level sets of these functions are called horospheres and can be viewed as spheres with center at infinity. For any v ∈ T 1 x X and r > 0, let A(v, r) denote the Jacobian of the map v → exp x (rv). The definition of harmonicity given in the Introduction is equivalent to the fact that this Jacobian does not depend on v, i.e. there is a function A on (0, ∞) such that A(v, r) = A(r) for all v ∈ T 1 X. See [Wil93,p. 224] for the equivalence of this property with the property given in the Introduction. The function A is called the density function of the harmonic manifold. For x ∈ X, let d x denote the distance function from the point x, i.e. d x (y) = d(x, y). A function f on X is said to be radial around a point x of X if f is constant on geodesic spheres centered at x. For each x ∈ X, we can define a radialization operator M x , defined for a continuous function f on X by where S(x, r) denotes the geodesic sphere around x of radius r = d(x, z), and σ denotes surface area measure on this sphere (induced from the metric on X), normalized to have mass one. The operator M x maps continuous functions to functions radial around x, and is formally self-adjoint, meaning for all continuous functions f, h with compact support. Introducing polar coordinates around x this follows easily from where θ x is the normalized canonical measure on the unit tangent space T 1 x X induced by the Riemannian metric and γ v : R → X is the geodesic satisfying Using these concepts, we have the following equivalent conditions for harmonicity: (1) For any x ∈ X, ∆d x is radial around x. (2) The Laplacian ∆ = div • ∇ commutes with all the radialization operators M x , i.e. M x ∆u = ∆M x u for all smooth functions u on X and all x ∈ X. (3) For any smooth function u radial around any x ∈ X the function ∆u is radial around x, as well. Let us now discuss basic properties of the density function A(r) of a harmonic manifold. A(r) is increasing in r, and the quantity A ′ (r)/A(r) ≥ 0 equals the mean curvature of geodesic spheres S(x, r) of radius r, which decreases monotonically as r → ∞ (see [RS03, Corollary 2.1, Proposition 2.2] and [Kni02, Section 1.2]). Furthermore, the mean curvature (A ′ /A)(r) of the geodesic sphere S(x, r) at a point z ∈ S(x, r) equals ∆d x (z), hence we have The limit lim r→∞ A ′ (r)/A(r) is equal to the mean curvature h ≥ 0 of horospheres. Therefore, all harmonic manifolds are in particular asymptotically harmonic, meaning they are manifolds without conjugate points such that all horospheres have the same constant mean curvature. Using the density function A(r), harmonic manifolds are of purely exponential volume growth if and only if there exist constants C > 1, h > 0 such that we have for all R > 1 1 C e hR ≤ A(R) ≤ Ce hR . In this particular case it turns out that the constant h > 0 agrees with the mean curvature of the horospheres. Let us finish this section by discussing specific properties of non-compact simply connected harmonic manifolds (X, g) of purely exponential volume growth as defined in (1). In this setting, purely exponential volume growth, Anosov geodesic flow and Gromov hyperbolicity are equivalent properties (see [Kni12]). 
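For orientation, it may help to keep the model case of real hyperbolic space in mind (an illustration we add here; it is not needed for the arguments): on $X = \mathbb{H}^n$ the density function is $A(r) = \sinh^{n-1}(r)$ up to a constant factor, so the mean curvature of geodesic spheres is $A'(r)/A(r) = (n-1)\coth(r)$, which decreases monotonically to $h = n - 1$, the mean curvature of horospheres. In particular $\frac{1}{C} e^{(n-1)R} \le A(R) \le C e^{(n-1)R}$ for all $R > 1$, so $\mathbb{H}^n$ is a harmonic manifold of purely exponential volume growth with $\rho = (n-1)/2$.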
A geodesic metric space (X, d) is called Gromov hyperbolic if there exists a δ > 0 such that geodesic triangles are δ-thin, that is each side is contained in the δ-tubes of the other two sides. Next we introduce a boundary structure for (X, g) and define a natural topology. The boundary structure is given by equivalence classes of geodesic rays in X, where two rays γ 1 , γ 2 are equivalent if {d(γ 1 (t), γ 2 (t)) : t ≥ 0} is bounded. We denote this boundary by ∂X and the equivalence class associated to a geodesic ray γ by γ(∞) ∈ ∂X. LetX = X ∪∂X. For each x ∈ X, we introduce the following bijective map pr x : B 1 (x) →X, where B 1 (x) ⊂ T x X is the closed ball of radius 1: Then the topology onX is defined such that pr x is a homeomorphism. This definition does not depend on the choice of x and is called the cone topology. We proved in [KP16,Theorem 4.5] that this topology agrees with the Gromov topology onX. Let v = γ ′ 0 (0). Then there exists t 0 ∈ R such that we have d(γ 0 (t + t 0 ), γ(t)) → 0 for t → ∞, This shows the independence of the limit of the choice of geodesic ray. ⋄ The level sets of B ξ,x are called horospheres centered at ξ and their mean curvatures agree with ∆B ξ,x for all ξ ∈ ∂X, x ∈ X. Since they have the same constant mean curvature h ≥ 0, we have In the case of purely exponential volume growth the constant h is positive. The Busemann cocycle B : ∂X × X × X → R is defined by and it is easy to see that it satisfies the following cocycle property: B(x, z, ξ) = B(x, y, ξ) + B(y, z, ξ). Proof: We first assume ξ = η. Since X is Gromov hyperbolic, there exists a geodesic γ : R → X with γ(−∞) = ξ and γ(∞) = η (see, e.g., [DK18,Lemma 11.83]). Using the Anosov property, we conclude that there exist s 0 , t 0 ∈ R such that d(γ 1 (s), γ(−s + s 0 )) → 0 as s → ∞ and d(γ 2 (t), γ(t + t 0 )) → 0 as t → ∞. Using these limits and similar arguments as in the proof of Lemma 2.1 (in particular (2)), we derive lim s,t→∞ We have the following relation between Busemann functions and the Gromov product in our setting (it also holds in any CAT(-1) space): Lemma 2.3. Let X be a noncompact, simply connected harmonic manifold of purely exponential volume growth. For x ∈ X and η ∈ ∂X, let γ x,η : [0, ∞) → X be a geodesic ray with γ x,η (0) = x and γ x,η (∞) = η. Then we have for all ξ ∈ ∂X: Proof: Let α : [0, ∞) → X be a geodesic ray with α(0) = x and α(∞) = ξ. Then by the previous Lemma, the double limit lim s,r→∞ d(α(s), γ x,η (r)) − (r + s) exists and equals −2(ξ|η) x . Since the double limit exists, it can be evaluated as an iterated limit, so we have: Now for a fixed r we have lim s→∞ (d(α(s), γ x,η (r)) − (r + s)) = B ξ,x (γ x,η (r)) − r, so substituting this in the previous equation gives the result. ⋄ Finally, we define the family of visibility measures λ x on harmonic manifolds (X, g) of purely exponential volume growth. For x ∈ X, let θ x denote the normalized canonical measure on T 1 x X induced by the Riemannian metric and λ x be the push forward of θ x to the boundary ∂X under p x . The visibility measures λ x are pairwise absolutely continuous with Radon-Nykodym derivative given by This result was shown in [KP16, Theorem 1.4] in the more general setting of asymptotically harmonic manifolds of purely exponential volume growth with curvature tensor bounds These curvature tensor bounds are satisfied for harmonic manifolds by [Bes78, Propositions 6.57 and 6.68]. Radial and horospherical parts of the Laplacian Let X be a non-compact simply connected harmonic manifold. 
Let h ≥ 0 denote the mean curvature of horospheres in X, let ρ = 1 2 h, and let A : (0, ∞) → R denote the density function of X. Now let {e i } be an orthonormal basis of T x X, and let γ i be geodesics with γ ′ i (0) = e i . Then where d x denotes the distance function from the point x, while any C ∞ function which is constant on horospheres at ξ ∈ ∂X is of the form f = u • B ξ,x for some C ∞ function u on R. The following proposition says that the Laplacian ∆ leaves invariant these spaces of functions, and describes the action of the Laplacian on these spaces: (1) For u a C ∞ function on (0, ∞), where L H is the differential operator on R defined by the Proposition follows immediately from the previous Lemma. ⋄ Accordingly, we call the differential operators L R and L H the radial and horospherical parts of the Laplacian respectively. It follows from the above proposition that a function f = u • d x radial around x is an eigenfunction of ∆ with eigenvalue σ if and only if u is an eigenfunction of L R with eigenvalue σ. Similarly, a function f = u • B ξ,x constant on horospheres at ξ is an eigenfunction of ∆ with eigenvalue σ if and only if u is an eigenfunction of L H with eigenvalue σ. In particular, we have the following: Proposition 3.3. Let ξ ∈ ∂X, x ∈ X. Then for any λ ∈ C, the function is an eigenfunction of the Laplacian with eigenvalue −(λ 2 + ρ 2 ) satisfying f (x) = 1. Analysis of radial functions As we saw in the previous section, finding radial eigenfunctions of the Laplacian amounts to finding eigenfunctions of its radial part L R . When X is a rank one symmetric space G/K, or more generally a harmonic N A group, then the volume density function is of the form A(r) = C sinh r 2 p cosh r 2 q , for a constant C > 0 and integers p, q ≥ 0, and so the radial part L R = d 2 dr 2 + (A ′ /A) d dr falls into the general class of Jacobi operators L α,β = d 2 dr 2 + ((2α + 1) coth r + (2β + 1) tanh r) d dr for which there is a detailed and well known harmonic analysis in terms of eigenfunctions (called Jacobi functions) [Koo84]. For a general harmonic manifold X, the explicit form of the density function A is not known, so it is unclear whether the radial part L R is a Jacobi operator. However, there is a harmonic analysis, based on hypergroups ( [Che74], [Che79], [Tri81], [Tri97b], [Tri97a], [BX95], [Xu94]), for more general second-order differential operators on (0, ∞) of the form where A is a function on [0, ∞) satisfying certain hypotheses which allow one to endow [0, ∞) with a hypergroup structure, called a Chebli-Trimeche hypergroup. We first recall some basic facts about Chebli-Trimeche hypergroups, and then show that the density function of a harmonic manifold satisfies the hypotheses required in order to apply this theory. Chebli-Trimeche hypergroups. A hypergroup (K, * ) is a locally compact Hausdorff space K such that the space M b (K) of finite Borel measures on K is endowed with a product (µ, ν) → µ * ν turning it into an algebra with unit, and K is endowed with an involutive homeomorphism x ∈ K →x ∈ K, such that the product and the involution satisfy certain natural properties (see [BH95] Chapter 1 for the precise definition). A motivating example relevant to the following is the algebra of finite radial measures on a noncompact rank one symmetric space G/K under convolution; as radial measures can be viewed as measures on [0, ∞), this endows [0, ∞) with a hypergroup structure (with the involution being the identity). 
It turns out that this hypergroup structure on [0, ∞) is a special case of a general class of hypergroup structures on [0, ∞) called Sturm-Liouville hypergroups (see [BH95], section 3.5). These hypergroups arise from Sturm-Liouville boundary problems on (0, ∞). We will be interested in a particular class of Sturm-Liouville hypergroups called Chebli-Trimeche hypergroups. These arise as follows (we refer to [BH95] for proofs of statements below): A Chebli-Trimeche function is a continuous function A on [0, ∞) which is C ∞ and positive on (0, ∞) and satisfies the following conditions: (H1) A is increasing, and A(r) → +∞ as r → +∞. (H3) For r > 0, A(r) = r 2α+1 B(r) for some α > −1/2 and some even, Let L be the differential operator on C 2 (0, ∞) defined by equation (4), where A satisfies conditions (H1)-(H3) above. Define the differential operator l on C 2 ((0, ∞) 2 ) by For f ∈ C 2 ([0, ∞)) denote by u f the solution of the hyperbolic Cauchy problem for all even, C ∞ functions f on R. We have ǫ x * ǫ y = ǫ y * ǫ x for all x, y, and the product (ǫ x , ǫ y ) → ǫ x * ǫ y extends to a product on all finite measures on [0, ∞) which turns [0, ∞) into a commutative hypergroup ([0, ∞), * ) (with the involution being the identity), called the Chebli-Trimeche hypergroup associated to the function A. Any hypergroup has a Haar measure, which in this case is given by the measure A(r)dr on [0, ∞). For a commutative hypergroup K with a Haar measure dk, a Fourier analysis can be carried out analogous to the Fourier analysis on locally compact abelian groups. There is a dual spaceK of characters, which are bounded multiplicative functions on the hypergroup χ : for all x, y ∈ K. For f ∈ L 1 (K), the Fourier transform of f is the functionf onK defined byf The Levitan-Plancherel Theorem states that there is a measure dχ onK called the Plancherel measure, such that the mapping f →f extends from L 1 (K) ∩ L 2 (K) to an isometry from L 2 (K) onto L 2 (K). The inverse Fourier transform of a function σ ∈ L 1 (K) is the functionσ on K defined by For the Chebli-Trimeche hypergroup, it turns out that the multiplicative functions on the hypergroup are given precisely by eigenfunctions of the operator L. For any λ ∈ C, the equation has a unique solution φ λ on (0, ∞) which extends continuously to 0 and satisfies φ λ (0) = 1 (note that the coefficient A ′ /A of the operator L is singular at r = 0 so existence of a solution continuous at 0 is not immediate). The function φ λ extends to a C ∞ even function on R. Since equation (5) reads the same for λ and −λ, by uniqueness we have φ λ = φ −λ . The multiplicative functions on [0, ∞) are then exactly the functions φ λ , λ ∈ C. The functions φ λ are bounded if and only if | Im λ| ≤ ρ. Furthermore, the involution on the hypergroup being the identity, the characters of the hypergroup are realvalued, which occurs for φ λ if and only if λ ∈ R ∪ iR. Thus the dual space of the hypergroup is given byK The hypergroup Fourier transform of a function f ∈ L 1 ([0, ∞), A(r)dr) is given byf for λ ∈ Σ (when the hypergroup arises from convolution of radial measures on a rank one symmetric space G/K, then this is the well-known Jacobi transform [Koo84]). The Levitan-Plancherel and Fourier inversion theorems for the hypergroup give the existence of a Plancherel measure σ on Σ such that the Fourier transform defines an isometry from L 2 ([0, ∞), A(r)dr) onto L 2 (Σ, σ), and, for any function f ∈ L 1 ([0, ∞), A(r)dr) ∩ C([0, ∞)) such thatf ∈ L 1 (Σ, σ), we have for all r ∈ [0, ∞). 
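Spelled out (our transcription of the standard hypergroup statements, consistent with the notation above): since the characters are the real-valued functions $\phi_\lambda$, $\lambda \in \Sigma$, the hypergroup Fourier transform and its inverse take the form

$$\hat{f}(\lambda) = \int_0^\infty f(r)\, \phi_\lambda(r)\, A(r)\, dr, \qquad f(r) = \int_\Sigma \hat{f}(\lambda)\, \phi_\lambda(r)\, d\sigma(\lambda),$$

the second formula holding for $f \in L^1([0,\infty), A(r)dr) \cap C([0,\infty))$ with $\hat{f} \in L^1(\Sigma, \sigma)$.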
In [BX95], it is shown that under certain extra conditions on the function A, the support of the Plancherel measure is [0, ∞) and the Plancherel measure is absolutely continuous with respect to Lebesgue measure dλ on [0, ∞), given by where C 0 > 0 is a constant, and c is a certain complex function on C − {0}. The required conditions on A are as follows: Making the change of dependent variable v = A 1/2 u, equation (5) becomes where the function G is defined by If the function G tends to 0 fast enough near infinity, then it is reasonable to expect that equation (6) above has two linearly independent solutions asymptotic to exponentials e ±iλr near infinity. Bloom-Xu show that this is indeed the case [BX95] under the following hypothesis on the function G: We will call this function the c-function of the hypergroup. We remark that if the hypergroup ([0, ∞), * ) is the one arising from convolution of radial measures on a noncompact rank one symmetric space G/K, then this function agrees with Harish-Chandra's c-function only on the half-plane {Im λ ≤ 0} and not on all of C. If we furthermore assume the hypothesis |α| = 1/2, then Bloom-Xu show that the function c is non-zero for Im λ ≤ 0, λ = 0, and prove the following estimates: There exist constants C, K > 0 such that Moreover they prove the following inversion formula: for any even function f ∈ It follows that the Plancherel measure σ of the hypergroup is supported on [0, ∞), and absolutely continuous with respect to Lebesgue measure, with density given by C 0 |c(λ)| −2 . Bloom-Xu also show that the c-function is holomorphic on the half-plane {Im λ < 0}. 4.2. The density function of a harmonic manifold. Let X be a simply connected, n-dimensional harmonic manifold of purely exponential volume growth, and let A be the density function of X. We check that A is a Chebli-Trimeche function, so that we obtain a commutative hypergroup ([0, ∞), * ), and that the conditions of Bloom-Xu are met so that the Plancherel measure is given by C 0 |c(λ)| −2 dλ on [0, ∞). The function A(r) equals, up to a constant factor, the volume of geodesic spheres S(x, r), which is increasing in r and tends to infinity as r tends to infinity, so condition (H1) is satisfied. As stated in section 2.2, the function A ′ (r)/A(r) equals the mean curvature of geodesic spheres S(x, r), which decreases monotonically to a limit h = 2ρ which is positive (and equals the mean curvature of horospheres), so condition (H2) is satisfied. Fixing a point x ∈ X, for r > 0, the density function A(r) is given by the Jacobian of the map φ : v → exp x (rv) from the unit tangent sphere T 1 x X to the geodesic sphere S(x, r). Let T be the map v → rv from the unit tangent sphere T 1 x X to the tangent sphere of radius r, T r x X ⊂ T x M , then φ = exp x •T , so the Jacobian of φ is given by the product of the Jacobians of T and exp x , hence where the function B is given by where v is any fixed vector in T 1 x X. Since B is independent of the choice of v, in particular is the same for vectors v and −v, the function B is even, and C ∞ on R with B(0) = 1. Thus condition (H3) holds for the function A, with α = (n − 2)/2. The density function A is thus a Chebli-Trimeche function, so we obtain a hypergroup structure on [0, ∞), which we call the radial hypergroup of the harmonic manifold X (the reason for this terminology will become clear from the the following sections). We proceed to check that condition (H4) is satisfied. For this we will need the following theorem of Nikolayevsy: Theorem 4.1. 
[Nik05] The density function of a harmonic manifold is an exponential polynomial, i.e. a function of the form (p i (r) cos(β i r) + q i (r) sin(β i r))e αir where p i , q i are polynomials and α i , β i ∈ R, i = 1, . . . , k. It will be convenient to rearrange terms and write the density function in the form where α 1 < α 2 < · · · < α l , and each f ij is a trigonometric polynomial, i.e. a finite linear combination of functions of the form cos(βr) and sin(βr), β ∈ R, with f imi not identically zero, for i = 1, . . . , l. For an exponential polynomial written in this form, we will call the largest exponent α l which appears in the exponentials the exponential degree of the exponential polynomial. Lemma 4.2. With the density function as above, we have α l = 2ρ, m l = 0 and f l0 = C for some constant C > 0. Thus the density function is of the form where P is an exponential polynomial of exponential degree δ < 2ρ. Proof: Recall that X has purely exponential volume growth, i.e. there exists a constant C > 1 such that for all r ≥ 1. If α l < 2ρ, then A(r)/e 2ρr → 0 as r → ∞, contradicting (9) above, so we must have α l ≥ 2ρ. On the other hand, if α l > 2ρ, then since f lm l is a trigonometric polynomial which is not identically zero, we can choose a sequence r m tending to infinity such that f lm l (r m ) → α = 0. Then clearly A(r m )/e 2ρrm → ∞, again contradicting (9). Hence α l = 2ρ. Using (8) and α l = 2ρ, we have as r → ∞ since f lm l is bounded and A ′ (r)/A(r) − 2ρ → 0 as r → ∞. Thus f ′ lm l is a trigonometric polynomial which tends to 0 as r → ∞, so it must be identically zero, hence f lm l = C for some non-zero constant C. Proof: By the previous lemma, A(r) = Ce 2ρr + P (r), where P is an exponential polynomial of exponential degree δ < 2ρ. We then have where Q is an exponential polynomial of exponential degree less than or equal to δ. Putting α = (2ρ − δ)/2, it follows that A ′ (r)/A(r) − 2ρ = O(e −αr ) as r → ∞. Now we can write the function G as Since (A ′ (r)/A(r) + 2ρ) is bounded, it follows from the previous paragraph that G(r) = O(e −αr ) as r → ∞. This immediately implies that condition (H4) holds. ⋄ In order to apply the result of Bloom-Xu on the Plancherel measure for the hypergroup, it remains to check that |α| = 1/2. Since α = (n − 2)/2, this means n = 3. Now the Lichnerowicz conjecture holds in dimensions n ≤ 5 ( [Lic44], [Wal48], [Bes78], [Nik05]), i.e. the only harmonic manifolds in such dimensions are the rank one symmetric spaces X = G/K, for which as mentioned earlier the Jacobi analysis applies, and the Plancherel measure of the hypergroup is well known to be given by C 0 |c(λ)| −2 dλ where c is Harish-Chandra's c-function. Thus in our case we may as well assume that X has dimension n ≥ 6, so that |α| = 1/2, and we may then apply the results of Bloom-Xu stated in the previous section. 4.3. The spherical Fourier transform. Let φ λ denote as in section 4.1 the unique function on [0, ∞) satisfying L R φ λ = −(λ 2 +ρ 2 )φ λ and φ λ (0) = 1. For x ∈ X let d x denote as before the distance function from the point x, d x (y) = d(x, y). We define the following eigenfunction of ∆ radial around x: The uniqueness of φ λ as an eigenfunction of L R with eigenvalue −(λ 2 + ρ 2 ) and taking the value 1 at r = 0 immediately implies the following lemma: Lemma 4.4. The function φ λ,x is the unique eigenfunction f of ∆ on X with eigenvalue −(λ 2 + ρ 2 ) which is radial around x and satisfies f (x) = 1. Note that for λ ∈ R, the functions φ λ,x are bounded. 
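To make the function $G$ from hypothesis (H4) concrete: the substitution $v = A^{1/2}u$ from Section 4.1 is a standard Liouville transformation, and a short computation (our sketch) turns the eigenfunction equation $u'' + \frac{A'}{A}u' + (\lambda^2 + \rho^2)u = 0$ into the Schrödinger-type form

$$v'' + (\lambda^2 - G(r))\, v = 0, \qquad G = \frac{1}{4}\Big(\frac{A'}{A}\Big)^2 + \frac{1}{2}\Big(\frac{A'}{A}\Big)' - \rho^2 = \frac{1}{4}\Big(\frac{A'}{A} - 2\rho\Big)\Big(\frac{A'}{A} + 2\rho\Big) + \frac{1}{2}\Big(\frac{A'}{A}\Big)'.$$

Since $A'/A \to 2\rho$, this is consistent with the decay $G(r) = O(e^{-\alpha r})$ established above, and it explains why solutions are expected to behave like $e^{\pm i\lambda r}$ at infinity, which is the source of the asymptotics defining the $c$-function.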
Let dvol denote the Riemannian volume measure on X. L 1 ([0, ∞), A(r)dr). In that case, again integrating in polar coordinates giveŝ whereû is the hypergroup Fourier transform of the function u. Moreover f ∈ C ∞ c (X) if and only if u extends to an even function on R such that u ∈ C ∞ c (R). Applying the Fourier inversion formula of Bloom-Xu for the radial hypergroup stated in section 4.1 to the function u then leads immediately to the following inversion formula for radial functions: Theorem 4.6. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth and f ∈ C ∞ c (X) be radial around the point x ∈ X. Then for all y ∈ X. Here c denotes the c-function of the radial hypergroup and C 0 > 0 is a constant. Moreover, the c-function is holomorphic on the half-plane {Im λ < 0}. Proof: As shown in the previous section, all the hypotheses required to apply the inversion formula of Bloom-Xu are satisfied, hence For the holomorphicity of the function c in {Im λ < 0} see the proof of Proposition 3.17 in [BX95]. ⋄ The Plancherel theorem for the radial hypergroup leads to the following: Theorem 4.7. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth. Let L 2 x (X, dvol) denote the closed subspace of L 2 (X) consisting of those functions in L 2 (X) which are radial around the point x. For f ∈ L 1 (X, dvol) ∩ L 2 x (X, dvol), we have The spherical Fourier transform f →f extends to an isometry from L 2 x (X, dvol) onto L 2 ([0, ∞), C 0 |c(λ)| −2 dλ). Fourier inversion and Plancherel theorem As before, we assume in this section that (X, g) denotes a simpy connected harmonic manifold of purely exponential volume growth unless stated otherwise. We proceed to the analysis of non-radial functions on X. Our definition of Fourier transform will depend on the choice of a basepoint x ∈ X. for λ ∈ C, ξ ∈ ∂X. Here as before B ξ,x denotes the Busemann function at ξ based at x such that B ξ,x (x) = 0. Using the formula for points o, x ∈ X, we obtain the following relation between the Fourier transforms based at two different basepoints o, x ∈ X: The key to passing from the inversion formula for radial functions of section 4.3 to an inversion formula for non-radial functions will be a formula expressing the radial eigenfunctions φ λ,x as an integral with respect to ξ ∈ ∂X of the eigenfunctions e (iλ−ρ)B ξ,x (Theorem 5.6). This will be the analogue of the well-known formulae for rank one symmetric spaces G/K and harmonic N A groups expressing the radial eigenfunctions φ λ,x as matrix coefficients of representations of G on L 2 (K/M ) and N A on L 2 (N ) respectively. We start with a basic relation between eigenfunctions of the Laplacian: Lemma 5.2. Let x ∈ X and ξ ∈ ∂X. Then for all λ ∈ C, where M x is the radialisation operator around the point x). In particular, φ λ,x (y) is entire in λ for fixed y ∈ X, and is real and positive for λ such that (iλ − ρ) is real and positive. Proof: Since the function e (iλ−ρ)B ξ,x is an eigenfunction of the Laplacian ∆ with eigenvalue −(λ 2 + ρ 2 ) and the operator M x commutes with ∆, the function f = M x (e (iλ−ρ)B ξ,x ) is also an eigenfunction of ∆ for the eigenvalue −(λ 2 + ρ 2 ). Since f is radial around x and f (x) = 1, it follows from Lemma 4.4 that f = φ λ,x . ⋄ The next proposition provides a connection between the Fourier transform and the spherical Fourier transform for radial functions: c (X) be radial around the point x ∈ X. 
Then the Fourier transform of f based at x coincides with the spherical Fourier transform, for all λ ∈ C, ξ ∈ ∂X. Proof: where σ r is normalized surface area measure on the geodesic sphere S(x, r). Evaluating the integral definingf x in geodesic polar coordinates centered at x we havẽ Now we need to define the visibility measures on the boundary ∂X: Given a point x ∈ X, let θ x be normalized canonical measure on the unit tangent sphere T 1 x X, i.e. the unique probability measure on T 1 x X invariant under the orthogonal group of the tangent space T x X. For v ∈ T 1 x X, let γ v : [0, ∞) → X be the unique geodesic ray with initial velocity v. Then we have a homeomorphism pr x : T 1 x X → ∂X, v → γ v (∞). The visibility measure on ∂X (with respect to the basepoint x) is defined to be the push-forward (pr x ) * θ x of λ x under the map pr x . For λ ∈ C and x ∈ X, define the functionφ λ,x on X bỹ It follows from the above equation thatφ λ,x (y) is entire in λ for fixed y ∈ X, and is real and positive for λ such that (iλ − ρ) is real and positive. Moreover, by Proposition 3.3, the functionφ λ,x is an eigenfunction of the Laplacian ∆ with eigenvalue −(λ 2 + ρ 2 ), andφ λ,x (x) = 1. Our next aim is to show thatφ λ,x is radial around x and, therefore, agrees with the function φ λ,x introduced in Lemma 4.4. We start with a crucial property of non-compact harmonic manifolds without any further assumptions, derived from a result of Szabo [Sza90] that the volume of the intersection of a metric ball B(x, r 1 ) with a geodesic sphere S(y, r 2 ) depends only on the radii r 1 , r 2 and the distance d = d(x, y) of their centers. We will therefore denote this volume by v(r 1 , r 2 , d). Proposition 5.4. Let (X, g) be a non-compact simply connected harmonic manifold. For v ∈ T 1 x X and r > 0, let b r v (y) = d(y, γ v (r)) − r, and θ x be the normalized canonical measure of T 1 x X. Then for every continuous function φ : R → C, the function is radial around x. Proof: Let ψ(s) = φ(s − r). Then Next, we consider the following expression: On the other hand, we have Now, we combine (12) and (13) and differentiate with respect to r and obtain A(r) In view of (11), this implies that which is obviously independent of the position of y within the sphere S(R, x) with R = d(x, y). This shows that the function F is radial around x. ⋄ The analogous statement for Busemann functions is obtained via a limiting argument: Corollary 5.5. Let (X, g) be a non-compact simply connected harmonic manifold and φ : R → C be a continuous function. Then the function is a radial function around x. Proof: Note that we have pointwise convergence φ(b r v (y)) → φ(b v (y)) for r → ∞ and, since |b r v (y)| ≤ d(x, y) for all r ≥ 0, we can apply Lebesgue's dominated convergence. ⋄ Theorem 5.6. Let (X, g) be a non-compact simply connected harmonic manifold. Let λ ∈ C and x ∈ X. Then for all y ∈ X. Proof: Both sides are eigenfunctions of the Laplacian ∆ with eigenvalue −(λ 2 +ρ 2 ). Moreover, both sides assume the value 1 as y = x. φ λ,x is radial around x, by definition, and the right hand side is radial by Corollary 5.5 with φ(s) = e iλ−ρ)s . Therefore, both expressions agree by the uniqueness of radial solutions of ∆u = −(λ 2 + ρ 2 )u, u(x) = 1. ⋄ We can now prove the Fourier inversion formula: Theorem 5.7. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth. Fix a basepoint o ∈ X. Then for f ∈ C ∞ c (X) we have . 
Proof: Given f ∈ C ∞ c (X) and x ∈ X, the function M x f is in C ∞ c (X), is radial around the point x and satisfies (M x f )(x) = f (x). By Theorem 4.6 applied to the function M x f we have (since φ λ,x (x) = 1). Now using the formal self-adjointness of the operator M x , Theorem 5.6, the fact that φ λ,x is radial around x and φ λ,x = φ −λ,x we obtain Using the relations (10), namelỹ we get Substituting this last expression for M x f (λ) in the equation The Fourier inversion formula leads immediately to a Plancherel theorem: Theorem 5.8. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth. Fix a basepoint o ∈ X. For f, g ∈ C ∞ c (X), we have where C 0 is the constant appearing in the Fourier inversion formula. Proof: Applying the Fourier inversion formula to the function g gives Taking f = g gives that the Fourier transform preserves L 2 norms, for all f ∈ C ∞ c (X). It follows from a standard argument that the Fourier transform extends to an isometry of L 2 (X, dvol) into L 2 ([0, ∞) × ∂X, C 0 |c(λ)| −2 dλdλ o (ξ)). ⋄ An integral formula for the c-function In this section we prove the following identity which can be viewed as an analogue of a well-known integral formula for Harish-Chandra's c-function (formula (18) in [Hel94], pg. 108): Theorem 6.1. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth and c be the c-function of the radial hypergroup of X. Let Im λ < 0. Then we have for any x ∈ X, ξ ∈ ∂X, where (ξ|η) x is the Gromov product given in Lemma 2.2. For the proof of this identity we need some preparations. Recall that a geodesic metric space (X, d) is called δ-hyperbolic if geodesic triangles are δ-thin, that is each side is contained in the δ-tubes of the other two sides. Moreover, the Gromov product (y|z) x , given by satisfies the following straightforward consequence of the triangle inequality: Let γ be a geodesic joining y, z ∈ X. Then for any point w on this geodesic γ we have This inequality entends to the boundary: for all points w on any geodesic connecting ξ, η ∈ ∂X. We use the Gromov product to define balls in the boundary ∂X with center ξ ∈ ∂X and radius r > 0: Note that these "balls" do not come from a metric but from the Gromov product. We need the following geometric result. So dominated convergence applies and we conclude that as r → ∞. This shows the equation Since c(λ) is holomorphic for Im λ < 0, we need to show that the right hand side is also holomorphic for Im λ < 0. Then both expressions must be equal for Im λ < 0, finishing the proof of the theorem. Since e −2(iλ−ρ)(ξ|η)x is holomorphic for all λ ∈ C, we need to show that for Im λ < 0. Then this expression is holomorphic for Im λ < 0 by Morera's Theorem. Let λ = σ − iτ with σ ∈ R and τ > 0. Then we have If τ ≥ ρ then the set {η ∈ ∂X | e −2(τ −ρ)(ξ|η)x > t} is empty for t > 1, and so the last integral reduces to an integral over [0, 1], which is bounded above by one since λ x is a probability measure. Since X is of purely exponential volume growth, it is a δ-hyperbolic space for some δ > 0 ( [Kni12]). For 0 < τ < ρ using Lemma 6.3 and the fact that λ x is a probability measure we obtain with h = 2ρ The convolution algebra of radial functions In this section, we assume (X, g) to be a non-compact simply connected harmonic manifold without any further assumption unless stated otherwise. Fix a basepoint o ∈ X. 
We define a notion of convolution with radial functions as follows: For a function f radial around the point o, let f = u • d o , where u is a function on R. For x ∈ X, the x-translate of f is defined to be the function Note that if f ∈ L 1 (X, dvol), then evaluating integrals in geodesic polar coordinates centered at o and x gives ||f || 1 = ∞ 0 |u(r)|A(r)dr = ||τ x f || 1 Definition 7.1. For f an L 1 function on X and g an L 1 function on X which is radial around the point o, the convolution of f and g is the function on X defined by = ||f || 1 ||g|| 1 < +∞ so that the integral defining (f * g)(x) exists for a.e. x, and f * g ∈ L 1 (X, dvol). Theorem 7.2. Let (X, g) be a non-compact simply connected harmonic manifold. Let L 1 o (X, dvol) denote the closed subspace of L 1 (X, dvol) consisting of those L 1 functions which are radial around the point o. Then for f, g ∈ L 1 o (X, dvol) we have f * g ∈ L 1 o (X, dvol), and L 1 o (X, dvol) forms a commutative Banach algebra under convolution. Proof: We first consider functions f, g ∈ C ∞ c (X) which are radial around o. It was shown in [PS15, Lemma 2.8] that f * g is again radial around o and it follows from [PS15, Remark 1, p.127] that f * g = g * f . Now the inequality ||f * g|| 1 ≤ ||f || 1 ||g|| 1 implies, by the density of smooth, compactly supported radial functions in the space L 1 o (X, dvol), that for f, g ∈ L 1 o (X, dvol) we have f * g = g * f ∈ L 1 o (X, dvol), so L 1 o (X, dvol) forms a commutative Banach algebra under convolution. ⋄ Now we derive a basic identity about the Fourier transform of a convolution. We assume here additionally that (X, g) is of purely exponential volume growth to guarantee the existence of the Fourier transform. Note if f, g ∈ C ∞ c (X) with g = u • d o radial around o, then f * g is compactly supported. where we have used the fact that for the function u • d y which is radial around y we have u • d y y (λ, ξ) =û(λ) =ĝ(λ) whereû is the hypergroup Fourier transform of u andĝ is the spherical Fourier transform of the function g which is radial around o. Finally, we remark that the radial hypergroup of a harmonic manifold (X, g) of purely exponential volume growth can be realized as the convolution algebra of finite radial measures on the manifold: convolution with radial measures can be defined, and the convolution of two radial measures is again a radial measure. This can be proved by approximating finite radial measures by L 1 radial functions and applying the Theorem 7.2. The convolution algebra L 1 o (X, dvol) is then identified with a subalgebra of the hypergroup algebra of finite radial measures under convolution. The Kunze-Stein phenomenon In this section we assume that (X, g) is a simply connected harmonic manifold of purely exponential volume growth and we prove a version of the Kunze-Stein phenomenon: for 1 ≤ p < 2, convolution with a radial L p -function defines a bounded operator on L 2 (X). For t = 0, applying Hölder's inequality we have, for any ǫ > 0, from which it follows that by choosing ǫ small enough so that q/(1 + ǫ) > 2 we have ||φ 0,x || q < +∞. ⋄ We remark that while the spherical Fourier transform was originally defined for radial L 1 functions, after fixing a basepoint x ∈ X it can also be defined for general L 1 functions by the same formulâ g(λ) := X g(y)φ λ,x (y)dvol(y) , λ ∈ R We then have the following Lemma: Lemma 8.2. Let x ∈ X, let 1 ≤ p < 2 and let g be an L p -function on X. Let q > 2 be such that 1 p + 1 q = 1. 
Then the spherical Fourier transform ĝ of g extends to a holomorphic function of λ on the strip S_q := {|Im λ| < γ_q ρ}, and is bounded on any closed sub-strip {|Im λ| ≤ t} for 0 < t < γ_q ρ. In particular, on R the transform ĝ satisfies a bound ‖ĝ‖_∞ ≤ C_p ‖g‖_p for a constant C_p > 0. Proof: Given 0 < t < γ_q ρ, for any λ ∈ C with |Im λ| ≤ t, by the previous Lemma ‖φ_{λ,x}‖_q ≤ C for some constant C only depending on q and t, so it follows from Hölder's inequality that the function ĝ(λ) := ∫_X g(y) φ_{λ,x}(y) dvol(y) is well-defined and bounded for |Im λ| ≤ t by a constant C_{q,t} times ‖g‖_p. The holomorphicity of the function ĝ follows from Morera's theorem, using the holomorphic dependence of φ_{λ,x} on λ. ⋄ We can now prove the following version of the Kunze-Stein phenomenon: Theorem 8.3. Let (X, g) be a simply connected harmonic manifold of purely exponential volume growth. Let x ∈ X and let 1 ≤ p < 2. Let g ∈ C_c^∞(X) be radial around the point x. Then for any f ∈ C_c^∞(X) we have
‖f ∗ g‖_2 ≤ C_p ‖g‖_p ‖f‖_2
for some constant C_p > 0. It follows that for any g ∈ L^p(X) radial around x, the map f ∈ C_c^∞(X) → f ∗ g extends to a bounded linear operator on L²(X) with operator norm at most C_p ‖g‖_p. Proof: Recall that for f, g ∈ C_c^∞(X) with g radial around x, the Fourier transform of a convolution satisfies
\widehat{f ∗ g}^x(λ, ξ) = \hat f^x(λ, ξ) ĝ(λ)
for λ ∈ R, ξ ∈ ∂X. Applying the Plancherel theorem and Lemma 8.2 above, we have
‖f ∗ g‖_2 = ‖\widehat{f ∗ g}^x‖_2 = ‖\hat f^x ĝ‖_2 ≤ ‖ĝ‖_∞ ‖\hat f^x‖_2 ≤ C_p ‖g‖_p ‖f‖_2.
The above inequality, valid for C_c^∞-functions, implies by a standard density argument that for any L^p radial function g, the map f ∈ C_c^∞(X) → f ∗ g extends to a bounded linear operator on L²(X) with norm at most C_p ‖g‖_p. ⋄
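Remark (illustration). The only analytic input in the proofs of Lemma 8.2 and Theorem 8.3 beyond the Plancherel theorem is the Hölder estimate, made explicit in the sketch below in the notation of the text; here C is the bound on ‖φ_{λ,x}‖_q supplied by the previous Lemma, and the second line is the chain of inequalities from Theorem 8.3.

```latex
% Hölder step: 1/p + 1/q = 1 with 1 <= p < 2 < q, and |Im lambda| <= t < gamma_q*rho
|\hat{g}(\lambda)|
  = \Bigl| \int_X g(y)\,\varphi_{\lambda,x}(y)\, d\mathrm{vol}(y) \Bigr|
  \le \|g\|_{p}\, \|\varphi_{\lambda,x}\|_{q}
  \le C \, \|g\|_{p},
\qquad
\|f * g\|_{2} = \|\widehat{f*g}^{\,x}\|_{2} = \|\hat{f}^{x}\hat{g}\|_{2}
  \le \|\hat{g}\|_{\infty} \|\hat{f}^{x}\|_{2}
  \le C_p \|g\|_{p} \|f\|_{2}.
```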
2019-05-08T19:07:47.000Z
2019-05-08T00:00:00.000
{ "year": 2019, "sha1": "b1a32ab03ad7433c0310ca8c559e1ef5bccd9a6a", "oa_license": null, "oa_url": "https://dro.dur.ac.uk/28856/1/28856.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b1a32ab03ad7433c0310ca8c559e1ef5bccd9a6a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
218776910
pes2o/s2orc
v3-fos-license
The Properties of Generalized Collision Branching Processes We consider basic properties regarding uniqueness, extinction, and explosivity for the Generalized Collision Branching Processes (GCBP). Firstly, we investigate some important properties of the generating functions for the GCB q-matrix in detail. Then, for any given GCB q-matrix, we prove that there always exists exactly one GCBP. Next, we devote ourselves to the study of extinction behavior and hitting times. Some elegant and important results regarding extinction probabilities, the mean extinction times, and the conditional mean extinction times are presented. Moreover, the explosivity is also investigated and an explicit expression for the mean explosion time is established. Introduction In this paper, we mainly consider extinction and explosivity for the Generalized Collision Branching Processes (GCBP). The particles in the system evolve as follows. Collisions between particles occur at random, and whenever m particles collide, they are removed and replaced by j "offspring" with probability p_j (j ≥ 0), independently of other collisions. In any small time interval (t, t + Δt), there is a positive probability θΔt + o(Δt) that a collision occurs, and the chance of 2 or more collisions occurring in that time interval is o(Δt). Assume that there are i particles present at time t and all interactions are equally likely. Then, there will be j particles after time Δt with probability \binom{i}{m} θ p_{j−i+m} Δt + o(Δt). In this paper, we take X(t) to be the number of particles present at time t, and therefore X(t) is a continuous-time Markov chain with nonzero transition rates
q_{ij} = \binom{i}{m} b_{j−i+m}, j ≥ i − m, i ≥ m,
where b_m = −θ(1 − p_m) and b_j = θ p_j for j ≠ m. This leads us to the following formal definition. Definition 1. A q-matrix Q = (q_{ij}; i, j ∈ Z_+) is called a generalized collision branching q-matrix (henceforth referred to as a GCB q-matrix) if it takes the following form:
q_{ij} = \binom{i}{m} b_{j−i+m} for i ≥ m, j ≥ i − m, and q_{ij} = 0 otherwise, (1)
where b_j ≥ 0 for j ≠ m and b_m = −Σ_{j≠m} b_j < 0, (2)
together with b_k > 0 (k = 0, 1, ..., m − 1). The conditions b_0 > 0 and Σ_{j=m+1}^∞ b_j > 0 are essential, while the condition b_k > 0 (k = 1, ..., m − 1) is imposed for convenience; all our conclusions hold true with some minor and obvious adjustments if this latter condition is removed. Guided by this fact, we formally define the generalized collision branching process as follows. Definition 2. A generalized collision branching process (henceforth referred to simply as a GCBP) is a continuous-time Markov chain, taking values in Z_+, whose transition function P(t) = (p_{ij}(t); i, j ∈ Z_+) satisfies the Kolmogorov forward equation
P′(t) = P(t) Q, (3)
where Q is a GCB q-matrix as defined in (1) and (2). In order to avoid discussing some trivial cases, we shall assume throughout this paper that Z_+ is an irreducible class for our q-matrix Q as well as for the corresponding Feller minimal Q-function, excepting where we consider the absorbing case. More general jump rates will be discussed in subsequent papers. The structure of this paper is as follows. Some preliminary results are obtained in Section 2. In Section 3, we show that there always exists exactly one GCBP for a given GCB q-matrix Q. The extinction behavior and hitting times are then considered in Section 4, where some elegant and important results regarding extinction probabilities, mean extinction times, and explosion times are obtained. 
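Remark (illustration). Before turning to the preliminaries, it may help to record how B(s), the generating function of {b_k} introduced in the next section, relates to the offspring distribution; the following display is a short derivation from the definitions b_m = −θ(1 − p_m) and b_j = θ p_j (j ≠ m), writing P(s) = Σ_k p_k s^k for the offspring generating function and μ = P′(1) for the mean number of offspring per collision (assumed finite for the second identity).

```latex
B(s) \;=\; \sum_{k \ge 0} b_k s^k
      \;=\; \theta\Bigl(\sum_{k \ge 0} p_k s^k - s^m\Bigr)
      \;=\; \theta\,\bigl(P(s) - s^m\bigr),
\qquad
B'(1) \;=\; \theta\,\bigl(P'(1) - m\bigr) \;=\; \theta\,(\mu - m).
```

In particular, the criticality condition B′(1) ≤ 0 that governs regularity and extinction below says precisely that a collision of m particles produces, on average, at most m offspring.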
Preliminaries In order to investigate properties of GCBPs, we introduce the generating function B(s) of the sequence {b_k; k ≥ 0} in (1) and (2) as
B(s) := Σ_{k=0}^∞ b_k s^k, 0 ≤ s ≤ 1.
This function plays an extremely important role in the following discussion. It is easy to see that B(s) is well defined for s ∈ [0, 1], and it is clear that B′(1) > −∞. Moreover, the number of solutions to the equation B(s) = 0 in s ∈ [0, 1) is determined by the sign of B′(1); we give the simple results in the following, whose proofs are obvious and thus omitted in this paper. Lemma 1. If B′(1) ≤ 0, then B(s) > 0 for all s ∈ [0, 1); if 0 < B′(1) ≤ ∞, then B(s) = 0 has exactly one root q in [0, 1). Lemma 2. Suppose that Q is a GCB q-matrix as defined in (1) and (2); then (6) holds, or equivalently (7). Proof: By the Kolmogorov forward equation (3), for any i, j ≥ 0, multiplying by s^j on both sides of the equality and summing over j ∈ Z_+, we immediately obtain (6). Finally, (7) is the Laplace transform of (6). Uniqueness In this section, we mainly consider the uniqueness of GCBPs. Let (p_{ij}(t); i, j ∈ Z_+) and (ϕ_{ij}(λ); i, j ∈ Z_+) be the Feller minimal Q-function and Q-resolvent, respectively, where Q is a GCB q-matrix. Then for any i ≥ m and |s| < 1, the following identities hold. Proof: It is easily seen that all the states G = {m, m + 1, ...} are transient, and thus (i) follows. This simple fact can also be easily obtained analytically: by the Kolmogorov forward equation, we have an identity which implies the claim. We now prove (9). Firstly, we know that the Feller minimal Q-resolvent can be obtained by the (Laplace transform version) forward integral iteration (14), and that ϕ^{(n)}_{ij}(λ) ↑ ϕ_{ij}(λ) as n → ∞ for all i, j ∈ E. Now, we consider our GCB q-matrix Q on Z_+, and we still denote by ϕ^{(n)}_{ij}(λ), i, j ∈ Z_+, the corresponding Feller minimal resolvent. Firstly, we claim that (15) holds for any n ≥ 0. For j < m, (15) is trivially true, so we assume j ≥ m. We use mathematical induction on n to prove the conclusion. Obviously, it is true for n = 0. Next, by (14), we can easily get (16). Define the required auxiliary notation; applying it, (16) can be rewritten as (18). By (14), a further identity holds. It follows from the above two expressions that (20) holds, and so (15) follows from the induction principle. Also, letting s ↑ 1 in (20) yields a limiting identity. However, it is easily seen that a complementary bound holds, and thus, by (18), we obtain the corresponding estimate. It follows from the Dominated Convergence Theorem and (20) that, for 0 < s < 1, the limiting equality holds. Letting n → ∞ in (18) and applying the above equality leads to the conclusion (25) for 0 < s < 1. However, we may find an ε > 0 such that B(s) ≠ 0 for all 0 < 1 − ε ≤ s < 1. Thus, applying the Monotone Convergence Theorem and Dominated Convergence Theorem yields the corresponding identity. It is easy to see that the above equality holds for all 0 < s < 1. Thus, (12) follows from (25). Moreover, (10) is the Laplace transform of (12), which implies that (10) holds for almost all t ≥ 0. Furthermore, note that the left-hand side of (11) is a continuous function of t > 0; thus, (10) holds for all t ≥ 0. Theorem 1. The GCB q-matrix Q is regular if and only if B′(1) ≤ 0. Proof: Firstly, suppose that B′(1) ≤ 0 and let P(t) = {p_{ij}(t); i, j ≥ 0} be the minimal Q-transition function. Substituting (1) into (3) gives an equation which easily yields (29) for 0 ≤ s < 1; that the right-hand side is strictly positive for s ∈ (0, 1) follows from Lemma 1. Moreover, it is easy to deduce that a uniform bound holds for all t ≥ 0, where q_i := −q_{ii} = \binom{i}{m}(−b_m) < ∞. Therefore, the series Σ_{j=0}^∞ p′_{ij}(t) s^j converges uniformly on [0, ∞) for every s ∈ [0, 1), and since the derivatives p′_{ij}(t) are all continuous, the derivative of Σ_{j=0}^∞ p_{ij}(t) s^j exists and equals Σ_{j=0}^∞ p′_{ij}(t) s^j. Thus, we may integrate (29) to obtain (31). Letting s ↑ 1 in (31) yields Σ_{j=0}^∞ p_{ij}(t) ≥ 1, which implies that the equality holds for all i ≥ 0. Therefore, the minimal Q-transition function is honest, and hence, Q is regular. 
Conversely, by Theorem 3.6 of Li and Chen [9], it is easy to obtain the conclusion, since Σ_{k=m}^∞ 1/\binom{k}{m} < ∞. The proof is complete. By Theorem 1, we can see that if B′(1) ≤ 0, then the GCBP is regular. In the sequel, we will prove that for any given GCB q-matrix Q, there always exists exactly one Q-process satisfying the Kolmogorov forward equation (3). Proof: It follows from Theorem 1 that we only need to consider the case 0 < B′(1) ≤ +∞. In order to prove the uniqueness of the GCBP, we will verify Reuter's condition; i.e., we need to prove that the equation (32) has only the trivial solution for λ = 1, and this then covers all λ > 0. Let Y = (y_i; i ≥ 0) be a nontrivial solution corresponding to λ = 1; then y_0 > 0, and (33) holds by (32). It is clear that the nontriviality of the solution η implies that Σ_{j=0}^∞ η_j s^j is well defined for all s ∈ [0, 1], since (34) holds, which in turn implies (36), because by the root test these series have the same radius of convergence. Applying Fubini's theorem together with (33) and (36) yields the conclusion. Extinction and Explosion From the previous section, we have obtained that the GCBP is uniquely determined by its q-matrix, so we will examine some of its properties in this section. Let {X(t), t ≥ 0} be the unique GCBP, and denote by P(t) = {p_{ij}(t); i, j ≥ 0} its transition function. Define the extinction times τ_k for k = 0, 1, ..., m − 1 as
τ_k := inf{t ≥ 0 : X(t) = k} (with inf ∅ := ∞),
denote the corresponding extinction probabilities by a_{ik} := P(τ_k < ∞ | X(0) = i), and the overall extinction probability by
a_i := P(τ < ∞ | X(0) = i) = Σ_{k=0}^{m−1} a_{ik}.
Also let E_i(·) denote the expectation conditional on X(0) = i. We now prove (42). It follows from Lemma 1 that q < 1, since 0 < B′(1) ≤ ∞. Putting s = q in (11) and noting that B(q) = 0, we discover that Σ_{j=0}^∞ p′_{ij}(t) q^j = 0 for any t > 0, implying that Σ_{j=0}^∞ (∫_0^t p′_{ij}(u) du) · q^j = 0. Thus, the corresponding identity holds for any t > 0. Letting t → ∞ and noting that all of the limits exist, we may apply the Dominated Convergence Theorem in the last term on the left-hand side to obtain (42), since q < 1. By Theorem 3, we know that the process is absorbed with probability less than 1 if 0 < B′(1) ≤ +∞. Our next result establishes that the process must explode if absorption does not occur in such cases. Theorem 4. For the Feller minimal GCBP, (45) holds, the right-hand side of (45) being the integral
∫_0^1 (1 − y)^{m−1} (a_{i0} + a_{i1} y + ⋯ + a_{i,m−1} y^{m−1} − y^i) / B(y) dy. (45)
Proof: It follows from (10) that, for all s ∈ [0, 1), we have an identity, i.e., (48). The apparent singularity at s = q on the left-hand side is removable, because the series on the right-hand side certainly converges for all s ∈ [0, 1). Moreover, the left-hand side is continuous and strictly positive (indeed increasing) on this interval. Therefore, integrating (48) with respect to s iteratively m times and applying Fubini's theorem yields (49) for any s ∈ [0, 1). Letting s ↑ 1 in (49), we can see that the equality (49) also holds for s = 1, and then the proof is complete provided (46) holds, which follows from Lemma 4. Lemma 4. Let (p_{ij}(t); i, j ∈ Z_+) and (ϕ_{ij}(λ); i, j ∈ Z_+) be the Feller minimal Q-function and Q-resolvent, where Q is a GCB q-matrix. (i) For any i, k ≥ m, (53) holds, and hence, since the integrand is nonnegative, we obtain (54). Proof: By (10), we have (55). Letting t → ∞ in the equality (55) for s ∈ (−1, 1), applying the Dominated Convergence Theorem on the left-hand side and the Monotone Convergence Theorem on the right-hand side, we obtain (53) by the uniqueness of the Taylor expansion. Furthermore, that (53) implies (54) is trivial, and hence, the proof is complete. Proof: 
It is easily seen from Theorem 3 and Lemma 1 that if 0 < B′(1) ≤ ∞, then Σ_{k=0}^{m−1} a_{ik} < 1, which implies E_i(τ) = +∞, so let us assume that B′(1) ≤ 0. For these latter cases, it follows from (55) that applying the Monotone Convergence Theorem yields the conclusion. Thus, the proof is complete. It is easily seen that E_i(τ_k) = +∞ (i ≥ m, k = 0, 1, ..., m − 1) when extinction is not certain. Under these circumstances, it is natural to consider the conditional expected extinction times, given in terms of quantities μ_{ik} (k ≤ m − 1) that satisfy a system of linear equations. Proof: First we consider the case 0 < B′(1) ≤ +∞, and thus 0 < q_0 < 1 and |q_j| < 1 for j = 1, ..., m − 1. Applying Theorem 3 together with Σ_{k=0}^∞ p_{ik}(t) q_k = q_i yields the expression (60). On integrating (60), noting that |q_j| < 1 for j = 1, ..., m − 1, letting t → ∞ and applying the Monotone Convergence Theorem yields the required identity. On the other hand, the definition of τ gives the complementary relation, and then all of the conclusions follow, since |q_j| < 1 for j = 1, ..., m − 1. From now on, we will consider the explosion probabilities and expected explosion times. By Theorem 1, we only need to consider the case 0 < B′(1) ≤ ∞. Denote by τ_∞ the explosion time and let a_{i∞} = P(τ_∞ < ∞ | X(0) = i) be the probability of explosion starting in state i. Since we are aiming at the minimal process, p_{i∞}(t) := 1 − Σ_{j=0}^∞ p_{ij}(t) = P(τ_∞ ≤ t | X(0) = i) is the probability of explosion by time t starting in state i, and p_{i∞}(t) → a_{i∞} as t → ∞. Finally, we consider the time spent in each state over the lifetime of the process. Let T_k be the total time spent in state k (k ≥ m) and let μ_{ik} = E_i(T_k) (i ≥ m). Then μ_{ik} = ∫_0^∞ p_{ik}(t) dt, and this quantity was evaluated in (29). We have therefore the following result. Theorem 8. All of the μ_{ik} (i ≥ m, k ≥ m) are finite and are given by the quantity above. Data Availability: Not applicable. Conflicts of Interest: The authors declare that they have no conflicts of interest.
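Remark (computational illustration, not part of the original analysis). The following Python sketch builds B(s) for a hypothetical GCB q-matrix with m = 2 and locates the extinction root q, the unique zero of B in [0, 1) in the supercritical case, by bisection; the rate θ and the offspring distribution p are illustrative values, not taken from the paper.

```python
import numpy as np

def make_B(theta, p, m):
    """Generating function B(s) = sum_k b_k s^k of a GCB q-matrix,
    where b_m = -theta*(1 - p[m]) and b_j = theta*p[j] for j != m."""
    b = theta * np.asarray(p, dtype=float)
    b[m] = -theta * (1.0 - p[m])
    return lambda s: sum(bk * s**k for k, bk in enumerate(b))

# Hypothetical example: m = 2 particles removed per collision;
# offspring distribution p = (p_0, ..., p_4) with mean 3.0 > m = 2,
# so B'(1) = theta*(3.0 - 2) > 0 (supercritical case: q < 1).
theta, m = 1.0, 2
p = [0.1, 0.1, 0.0, 0.3, 0.5]
B = make_B(theta, p, m)

# B(0) = b_0 > 0 and B(s) < 0 just below 1 (since B(1) = 0, B'(1) > 0),
# so the root q can be bracketed and found by bisection.
lo, hi = 0.0, 1.0 - 1e-9
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if B(mid) > 0:
        lo = mid          # still to the left of the sign change
    else:
        hi = mid
q = 0.5 * (lo + hi)
print(f"extinction root q = {q:.6f}, check B(q) = {B(q):.2e}")
```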
2020-04-23T09:13:38.887Z
2020-04-17T00:00:00.000
{ "year": 2020, "sha1": "c2e4f106c3ae24a1699139054e5dd2e2e64c0f03", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2020/1398476", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "606b31c107435b4581f2af4fe26e5b5c6d4fc6be", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
248158203
pes2o/s2orc
v3-fos-license
A study on spatial variation of water flow at confluence connected to non-orthogonal channels Most studies on the flood flow characteristics at a confluence focus on channels connected orthogonally, or at a right angle, but studies on non-orthogonally connected channels remain limited. In this study, hydraulic-model experiments and numerical simulations are conducted to analyze the spatial variation of water flow in and around a confluence connected to non-orthogonal channels. Comparison of the measured and simulated water depth distributions in and around the confluence indicates that the results are in relatively good agreement. In the experiment where the angle between the two upstream channels is 45°, the water flow pattern in and around the confluence corresponds approximately to Type I proposed by Mignot et al. (J Hydraul Res 46:723–738, 2008). However, none of the proposed flow types corresponds to the water flow pattern measured in the case of the 135° angle. For analyzing the variation of the water depth in and around the confluence with inflow, numerical simulation is performed by setting the inflow ratio of the two inlet channels to one, three, and six, respectively. Introduction In most cities, crossroads connect three or four roads either orthogonally or non-orthogonally, as shown in Fig. 1a. In general, when the flood entering a crossroad is relatively small and the road gradient is not steep, the flow in a crossroad is characteristically subcritical. However, complex flood flow involving subcritical as well as supercritical flows can also be observed, as shown in Fig. 1b. Most of the existing studies on urban flooding have adopted experimental or numerical methods that mainly consider road networks as water channels, instead of observing the actual flood flows moving through roads distributed across urban areas. For example, Best and Reid (1984), Weber et al. (2001), and Neary et al. (1999) experimentally studied the flow characteristics around a confluence connected to three channels under subcritical conditions. These studies primarily analyzed the flow structure at a confluence, such as re-circulation flow, two-dimensional flow, and flow separation and contraction. Bowers (1966) suggested that hydraulic jumps develop in the inflow channels of a confluence zone under supercritical conditions, depending on the geometric shape of the confluence and the water inflow. Schwalt and Hager (1995) conducted a study to identify the major characteristics of surface profiles formed when supercritical flows develop at a confluence connected to three channels. Rivière and Perkins (2004) examined the characteristics of supercritical flows at a confluence to which three channels are connected at 90°, and Rivière et al. (2011) investigated experimentally subcritical flow in an intersection formed by four similar orthogonal channels with two inflows and two outflows for a wide range of experimental conditions. Nania et al. (2004) investigated the characteristics of supercritical flows at a confluence with two inflowing channels and two outflowing channels connected at a right angle, and divided the flows into two types according to the location of hydraulic jumps (Type I for a normal hydraulic jump at each inflowing channel; Type II for a normal hydraulic jump at either inflowing channel and an oblique hydraulic jump within the confluence). Mignot et al. 
(2008) conducted experimental and numerical studies on the supercritical flow characteristics in and around a confluence where four channels are connected to each other at a right angle, and accordingly divided the flows into four types (Types I, II-1, II-2, and III) based on the location of normal hydraulic jumps formed at inflowing channels and oblique hydraulic jumps formed within the intersection (Fig. 2, after Mignot et al. 2008). The study's applicability was also verified through a comparison of the experimental results and a simulation in and around the confluence with Rubar 20 (Paquier 1995; Mignot 2005), a two-dimensional finite volume model based on the shallow water equations and a second-order MUSCL technique. Abderrezzak et al. (2011) conducted an experimental study to investigate the characteristics of dividing critical flows in a 90° open-channel junction formed by three horizontal equal-width channels, and found a relationship between the discharge division ratio and the tailwater Froude number. Recently, Rivière et al. (2014) examined experimentally trans-critical flows in three- and four-channel intersections and proposed empirical correlations, derived from the experimental data, for the flow distribution in three-channel intersections and four-channel intersections with one or two critical sections. Shettar and Murthy (1996) numerically analyzed the flow characteristics at a confluence connected to three channels under subcritical conditions using a two-dimensional numerical model based on the k–ε turbulence technique and the shallow water equations. In addition, Khan et al. (2000) applied a simple turbulence model, based on the mixing length formula, and a two-dimensional model, based on the shallow water equations, for numerically analyzing the flow characteristics at a bifurcation and a confluence of three channels under subcritical conditions. To verify applicability to subcritical conditions at a confluence, Huang et al. (2002) used a three-dimensional model based on the k–ε turbulence technique for comparative analysis with the experimental results of Shumate (1998). Ghostine et al. (2009) utilized the experimental results reported by Mignot et al. (2006) to verify the applicability of a two-dimensional finite element model based on the Runge-Kutta Discontinuous Galerkin (RKDG) technique by comparing the simulation results from Rubar 20 and FLUENT, a three-dimensional model. Jeong et al. (2010) recently analyzed numerically the flood flow characteristics at a confluence symmetrically connected to four channels using a two-dimensional well-balanced HLLC finite volume model. Very recently, Mignot et al. (2019) reviewed the 45 existing studies available on urban flooding based on laboratory experiments, to help computational and laboratory modelers. As is apparent, most studies on the flood flow characteristics at a confluence focus on channels connected orthogonally, or at a right angle, but studies on non-orthogonally connected channels remain limited. In this study, a hydraulic-model experiment and a numerical simulation using ANSYS CFX (ver. 14) (2013), a commercial three-dimensional CFD model, are performed for investigating the characteristics of flood flow in and around a confluence connected non-orthogonally to four channels. The simulated results are verified by comparison with the results of the hydraulic-model experiment, and the flow characteristics at the confluence are analyzed under various inflow scenarios. 
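Since the classification of the flows above rests on whether they are subcritical or supercritical, the following Python sketch computes the Froude number Fr = V/√(gh) for a rectangular channel. It uses the 100 l/min inflow, 0.3 m channel width, and the approach (3.1 cm) and confluence (1.0 cm) depths reported later for Case I-1, under the simplifying assumption that the full discharge passes through one channel at the quoted depth.

```python
import math

def froude_number(Q, b, h, g=9.81):
    """Froude number Fr = V / sqrt(g*h) for a rectangular channel.
    Q: discharge [m^3/s], b: channel width [m], h: water depth [m]."""
    V = Q / (b * h)                      # cross-section mean velocity [m/s]
    return V / math.sqrt(g * h)

Q = 100 / 1000 / 60                      # 100 l/min -> m^3/s
for h in (0.031, 0.010):                 # approach and confluence depths, Case I-1
    Fr = froude_number(Q, b=0.3, h=h)
    regime = "supercritical" if Fr > 1 else "subcritical"
    print(f"h = {h*100:.1f} cm: Fr = {Fr:.2f} ({regime})")
```

The result (subcritical at 3.1 cm, supercritical at 1.0 cm) is consistent with the hydraulic jumps described in the experiments below.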
Experimental methods and conditions A hydraulic model composed of acrylic, so that the water flow could be observed, was used to analyze the flood flow characteristics of the channels and the confluence, as shown in Fig. 3. The set-up includes a pump to supply the water flow and an electronic flowmeter installed between the water tanks connected to the end of each channel for controlling the flow. Flow control valves and electronic flowmeters (WTM-1000, range: 0.03–10 m³/s, accuracy: ±0.5%, repeatability: ≤0.1%) were used to control the water flow and generate flow into the channels. Then, after the flows were stabilized (within approximately 50 s in this experiment), the ultrasound water level meter (UC500-20GM, range: 0.838 to 30.480 cm, accuracy: better than 0.2% of range) shown in Fig. 4a was used to measure the water depth within a series of grids around the confluence. The grid system included 1447 diamond-shaped grids (3 × 3 cm), as illustrated in Fig. 4b. After the water flow in the confluence and channels reached steady state, the water depth was measured 5 times for each grid, and the values presented hereafter are the averages of the 5 measurements. The widths of the four channels connected to the confluence were identical, 0.3 m each. The lengths of the two horizontally connected channels were 2.0 m each, and those of the two channels connected non-orthogonally, at an inclination of 45° to the horizontal channels, were 2.3 m each. In Fig. 5, ①, ②, ③ and ④ represent the channels connected at the confluence. Two cases were considered in the experiment. In Case I, channels ① and ② were the inflow channels and the angle between the two upstream channels was 45°; channels ③ and ④ were the outflow channels (Fig. 5a). In Case II, channels ① and ④ were the inflow channels and the angle between the two upstream channels was 135°; channels ② and ③ were the outflow channels (Fig. 5b). Q₁ and Q₂ are the inflows at the inflow channel boundaries, and h₁ and h₂ refer to the water depths at the corresponding inflow channel boundaries. Q₃ and Q₄ are the outflows through the outflow channel boundaries, and h₃ and h₄ are the water depths at the corresponding outflow channel boundaries. At the beginning of the experiment, there was no water flow in any channel. When the water surface in the water tank connected to the inflow channel reached the bottom of the inflow channel, the water started to flow in the inflow channel. The inflow was controlled by the electronic flowmeter connected to the water tank, and the water depth at the inflow channel boundary was measured by an attached ruler. The outflow was measured from the volume of water collected in the water tanks, ΔV = l × w × h₂ − l × w × h₁ (where l and w are the length and width of the water tank, respectively), over a given time interval t₂ − t₁ (Fig. 6); a short computational sketch of this volumetric calculation is given at the end of the experimental results below. Table 1 summarizes the inflow conditions in the two inflow channels and the corresponding water depths for each case. In this study, the experiment was conducted by setting the inflow ratio of the two inflow channels to one (flow ratio = 1) and three (flow ratio = 3). Experimental results and analyses Figure 7 depicts the spatial variation of water depth measured around the confluence for Case I after the flow reaches steady state. In Case I-1, the water flowing through the two inflow channels approaches the confluence with a depth of 3.1 cm. 
After flowing into the confluence, the water depth begins to decrease, with the depth around the mouths of channels ③ and ④ connected to the confluence decreasing to an average of 1.0 cm. When the water entering the confluence reaches the point where channels ③ and ④ meet (•), the flow separates and an oblique hydraulic jump appears. This type of water flow around the confluence corresponds approximately to Type I proposed by Mignot et al. (2008). For Case I-2, after flowing through channel ① at a depth of 5.0 cm and channel ② at a depth of 3.2 cm, the water flows into the confluence with significantly lower depths of 1.0 and 1.4 cm at the exits of channels ③ and ④, respectively (Fig. 6.I-1). After the flow is stabilized, channels ① and ② have a similar depth distribution (5.0 cm on average). This result can be attributed to the backwater effect: the larger quantity of water flowing into channel ② moves faster than that flowing into channel ①, causing the water flowing through channel ① to move in a direction opposite to the flow without appropriately passing through the confluence (Fig. 6.I-2). Figure 8 displays the spatial variation of the water depth measured around the confluence for Case II. In Case II-1, the water quantity flowing into channels ① and ④ is the same, at 100 l/min, and the depth of water moving through these two inflowing channels is approximately 3.1 cm on reaching the confluence. However, the water depth distribution at the confluence in Case II-1 exhibits a considerably different trend from that of Case I-1. After the water flows into the confluence via the two channels, the water depth does not decrease; instead, it maintains the depth before inflow. The water flow into the confluence is divided after reaching the point (•) adjoining channels ② and ③, and moves into each channel with oblique hydraulic jumps. This type of water flow around the confluence does not correspond to any flow type proposed by Mignot et al. (2008). The water surface in channels ② and ③, where the depth decreases rapidly (from 3.1 to 1.7 cm) after passing the confluence, adopts a tilted shape because of the continual supply of water from the point of divided flow. In Case II-2, because of the larger quantity of water in channel ④ flowing toward the confluence at a fast pace, the water flowing into channel ① fails to pass the confluence and demonstrates the backwater effect, as observed in Case I-2, with the water moving back against the inflow direction. The water depth distribution is observed to be greater in channel ①, with the reduced inflow, compared to channel ④, with the greater inflow, at the initial stage. In Case I-2, the inflows to channels ① and ② enter at a relative angle of only 45°, but in Case II-2, the inflows into channels ① and ④ are in almost opposite directions (135° apart). This causes greater disturbance in the flows at the confluence and, thereby, a greater water depth distribution. The flow at the confluence heads toward channel ①, which carries a relatively smaller water quantity, contributing to the greater water depth. In addition, an area with rapidly increasing water depth (the area marked with an oval) in channel ③ in the immediate vicinity of the confluence is not observed in Case I-2. Further, the results of the hydraulic-model experiments for Cases I and II are verified through numerical simulation. 
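The outflow measurement described in the methods section, ΔV = l × w × (h₂ − h₁) over the interval t₂ − t₁, reduces to the one-line volumetric calculation sketched below in Python; the tank dimensions and level readings are hypothetical values, chosen only so that the result comes out to a round 100 l/min.

```python
def tank_outflow(l, w, h1, h2, t1, t2):
    """Mean outflow Q = l*w*(h2 - h1)/(t2 - t1) from water-level readings
    h1 at time t1 and h2 at time t2 in a rectangular tank (SI units)."""
    dV = l * w * (h2 - h1)               # collected volume [m^3]
    return dV / (t2 - t1)                # mean discharge [m^3/s]

# Hypothetical tank (1.0 m x 0.8 m) filling by 0.125 m over 60 s
Q = tank_outflow(l=1.0, w=0.8, h1=0.20, h2=0.325, t1=0.0, t2=60.0)
print(f"Q = {Q:.5f} m^3/s = {Q*60*1000:.0f} l/min")
```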
Table 2 shows the comparison between inflows and outflows based on the results of the hydraulic-model experiment for Cases I and II; the inflows and outflows agree relatively well in both cases. ANSYS CFX model The numerical scheme of this model is the finite volume method (Martinez and Niño 2020), which divides the computational domain into small cells for obtaining solutions by assigning a boundary condition to each cell. The complete set of governing equations for an incompressible multiphase flow model is as follows:
Continuity equation: ∂ρ_m/∂t + ∇·(ρ_m V_m) = 0
Momentum equation: ∂(ρ_m V_m)/∂t + ∇·(ρ_m V_m ⊗ V_m) = −∇p_m + ∇·(τ + τ_t) + ρ_m f
Volume fraction transport equation: ∂α_l/∂t + ∇·(α_l V_m) = 0
where ρ_m is the mixture density of the fluid, V_m = [u, v, w] is the velocity vector, t is the time, p_m is the pressure, τ is the viscous stress tensor, τ_t is the turbulent viscous stress tensor, f is an external force such as buoyancy, ρ_l is the density of each fluid in the mixture, and α_l is the liquid-phase volume fraction, ranging over [0, 1] for each liquid phase. If water and air are mixed, the value of α_l becomes 0.5 at the boundary between the two fluids. If two fluids, such as water and air, are mixed with each other, the mixture density is ρ_m = Σ_{n=1}^{2} α_n ρ_n. The numerical analysis methods provided by the ANSYS CFX model for analyzing the turbulence elements include the Reynolds-Averaged Navier-Stokes (RANS) Eddy-Viscosity method, the RANS Reynolds-Stress method, and the Eddy Simulation method (ANSYS Inc., 2013). In this study, the standard k–ε model, which is a type of RANS Eddy-Viscosity method, was chosen due to its simplicity in terms of its empirical parameters and its wide use in engineering applications (Matthews et al. 1998). Model verification For verifying the model applied in this study, the four different inflow conditions in Cases I and II of the hydraulic-model experiments were simulated under an unsteady-state condition in order to investigate the spatial variation of the water depth with time. The channels were presumed to be flat and without inclination; the grid system comprised 153,304 nodes and 621,634 cells. The cell type was tetrahedral, and the maximum, minimum, and mean cell sizes were 4.5 × 10⁻⁷, 2.4 × 10⁻⁷, and 3.8 × 10⁻⁷ m³, respectively. The upstream boundaries of the two inflow channels (① and ② for Case I, and ① and ④ for Case II in Fig. 5) were set as the inflow boundary conditions. The downstream boundaries of the two outflow channels (③ and ④ for Case I, and ② and ③ for Case II) were treated as open boundary conditions. In addition, the channel bottoms and side walls were presumed to be flat and smooth. The total simulation time was set to 60 s, which is sufficient to reach steady state, and the time interval was set to 0.01 s. In this verification, the roughness of the bottom and side walls in all channels was ignored because it was assumed that the acrylic surface is very smooth. Table 3 compares the measured outflows (Q₃M and Q₄M at the downstream channel boundaries in Case I, and Q₂M and Q₃M in Case II) with the simulated outflows (Q₃S and Q₄S in Case I, and Q₂S and Q₃S in Case II). The simulated and measured values for each outflow condition show considerable agreement. 
Figures 9 and 10 compare the measured and simulated results of the spatial change in the water depth along the centerlines of the four channels around the confluence, for Cases I and II (simulation time of 60 s). The results demonstrate that the spatial variation of the simulated water depth generally agrees with the measured results. A significant difference in the water depth between Case I and Case II is observed: the flow moving from channel ① to channel ③ in Case I exhibits a drastic decrease in the water depth in the confluence and a rapid increase in the water depth, similar to a hydraulic jump, after passing the confluence; however, in Case II, the water depth maintains its previous depth within the confluence, reduces rapidly after passing through the confluence, and then begins to slowly increase. For a more quantitative comparison of the measured and simulated water depths, the three error equations (4), (5), and (6), involving L₁ (absolute mean error), L₂ (root mean square error), and L∞ (maximum error), respectively, were applied (a short computational sketch of these three metrics is given at the end of this section). The estimated results are depicted in Table 4. As the results from these three error equations show significantly small values, it can be concluded that the results of the numerical model applied in this study are in agreement with the measured values. Changes in water depth at the confluence with increasing inflows For analyzing the changes in the water depth in and around the confluence with increased inflow, the results of Cases I-1, I-2, II-1, and II-2 are compared with those of a numerical simulation in which the ratio between the inflows into the channels is set to six (Cases I-3 and II-3; inflow ratio = 6). The numerical simulation results alone were used for comparative analysis because the electronic flowmeter adopted for the hydraulic-model experiments can only adjust the inflow up to 450 l/min, rendering it impossible to perform experiments that require an inflow ratio of six. The inflow conditions for Cases I-3 and II-3 are shown in Table 5. Figure 11 compares the spatial variation of the water depth in Cases I-3 and II-3 at 60 s with those in Cases I-1 and I-2 and Cases II-1 and II-2, respectively. As the inflows increase, the backwater effect in channel ① is enhanced, and the water depth in Case I increases more in channel ④ than in channel ③. The result is the same for Case II-3: when the inflow increases, the backwater effect strengthens in channel ①, and the water depth increases in channel ②. However, the increase in water depth in channel ③ is relatively limited. Complementing Fig. 11, the spatially changing water depths along the centerline of each of the four channels around the confluence in Cases I and II are compared in Fig. 12. The water flowing through channels ① and ③ in Case I enters the confluence with an increased water depth (approximately 3.10 cm for Case I-1, 4.85 cm for Case I-2, and 6.35 cm for Case I-3); a rapid decrease in all these water depths occurs in the confluence in Case I. After the water flows through the confluence, a hydraulic jump is observed in Case I-2 farther from the confluence compared to the Case I-1 location (1.93 m for Case I-1 and 1.63 m for Case I-2), and the scale is greater in Case I-2 (1.0 cm for Case I-1 and 1.5 cm for Case I-2). In Case I-3, a smaller hydraulic jump is observed at a location similar to that of the hydraulic jump in Case I-2; however, a location within 0.6 m is associated with a gradually increasing water depth. 
As the inflow increases, the water depth in channels ② and ④ exhibits a clear increasing tendency. The water depth before the arrival of the flow at the confluence and within the confluence increases to a level similar to those in channels ① and ③. The decreasing tendency of the water depth differs in the confluence; as the inflow increases, the degree to which the water depth is reduced also increases (3.10 cm to 1.00 cm in Case I-1, 4.85 cm to 1.50 cm in Case I-2, and 6.35 cm to 2.35 cm in Case I-3). After the flow passes the confluence, a hydraulic jump occurs at a location increasingly farther from the confluence as the inflow increases, and there is an increase in the scale as well (1.0 cm for Case I-1, 1.5 cm for Case I-2, and 2.0 cm for Case I-3). Unlike in Cases I-1 and I-2, two hydraulic jumps are observed in Case I-3, which may contribute to the smaller hydraulic jumps in channels ① and ③. In Case II, the water flowing through channels ① and ③ reaches the confluence at an increased water depth (the same as in Case I) as the inflow increases. In Cases II-1 and II-2, the water depth in the confluence remains almost constant and rapidly decreases immediately before passing the confluence. Further, the water depth falls to approximately 1.1 cm before increasing again. Compared to Case I, the scale of the hydraulic jumps is smaller (1.5 cm for Case II-1 and 3.25 cm for Case II-2), and they are located farther from the confluence as the inflow increases (1.35 m for Case II-1 and 1.15 m for Case II-2). In Case II-3, the water depth increases up to the midpoint of the confluence before rapidly decreasing, and falls to a level similar to those in Cases II-1 and II-2 after passing the confluence. Although two hydraulic jumps are observed after the flow passes the confluence in channels ② and ④ in Case I-3, smaller ones are found in channels ① and ③ in Case II-3. As the inflow increases in channels ② and ④, the water depth in the confluence remains almost constant in Case II-1 but rapidly falls immediately before passing the confluence. In Cases II-2 and II-3, the water depth decreases before the flow reaches the confluence, and the surface profiles are convex-shaped in the confluence area. This phenomenon becomes increasingly evident as the inflow increases, and the water depth distribution following the flow movement remains consistent after the confluence. Table 6 shows the comparison of the maximum and minimum water depths in channels ① and ③, and in channels ④ and ②, for Cases I and II. The results suggest that the maximum depth in channels ① and ③ increases in Cases I-2 and I-3 by two and three times, respectively, compared to Case I-1, and increases by a similar ratio in Case II. The maximum water depth in channels ④ and ② increases in Cases I-2 and I-3 by 1.5 times and two times, respectively, compared to Case I-1, and increases by a similar ratio in Case II. The minimum water depth in channels ① and ③ increases in Cases I-2 and I-3 by 1.1 times and 1.4 times, respectively, compared to Case I-1, but shows no significant change in Case II. The minimum depth in channels ② and ④ increases in Cases I-2 and I-3 by 1.5 times and 2.1 times, respectively, compared to Case I-1, and increases in Cases II-2 and II-3 by 2.2 times and 3.8 times, respectively, compared to Case II-1.
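The three error measures applied in the model verification, called L₁ (absolute mean error), L₂ (root mean square error), and L∞ (maximum error) in the text (their displays, Eqs. (4)–(6), are not reproduced in this excerpt), admit the standard definitions implemented in the Python sketch below; the two depth profiles are hypothetical values for illustration.

```python
import numpy as np

def depth_errors(h_meas, h_sim):
    """L1 (absolute mean error), L2 (root mean square error) and
    L_inf (maximum error) between measured and simulated water depths."""
    d = np.abs(np.asarray(h_meas, float) - np.asarray(h_sim, float))
    return d.mean(), np.sqrt((d**2).mean()), d.max()

# Hypothetical centerline depths [cm] at a few measurement stations
h_meas = [3.1, 3.0, 1.4, 1.0, 1.9, 2.0]
h_sim  = [3.0, 3.1, 1.3, 1.1, 2.0, 1.9]
L1, L2, Linf = depth_errors(h_meas, h_sim)
print(f"L1 = {L1:.3f} cm, L2 = {L2:.3f} cm, L_inf = {Linf:.3f} cm")
```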
Conclusions In this study, a hydraulic-model experiment and a numerical simulation were performed in order to investigate the characteristics of flood flow in and around a confluence connected to channels aligned non-orthogonally, and the following results were obtained. The experimental and numerical results for the water depth distribution in the confluence with increasing inflows were in relatively good agreement. When the same inflows entered the two channels, the water depth in the horizontal channels (channels ① and ③) immediately decreased after the flow entered the confluence (Case I); however, when either of the two channels had an increased inflow, the water depth remained constant before the flow arrived at the confluence and decreased rapidly before the flow passed the confluence (Case II). In addition, while a clear hydraulic jump was observed, with a rapidly increasing water depth, after the flow passed the confluence in Case I, the water depth in Case II gradually increased, resulting in relatively smaller hydraulic jumps. When the same quantity of water flowed into channels ② and ④, the water depth rapidly decreased immediately after the flow reached the confluence. With increasing inflow, the water depth increased abruptly after the flow passed the confluence, producing increasingly large hydraulic jumps (Case I). In particular, two hydraulic jumps were observed in Case I-3. The water depth tended to be lower in Case II before the flow reached the confluence, and the surface profiles were convex-shaped in the confluence area. This phenomenon was more evident when the inflow increased, and is associated with a consistent depth distribution after the flow passed the confluence. The results of the hydraulic-model experiment and the numerical simulation of water flows at a channel confluence, where the channels are connected non-orthogonally at 45° and 135°, have only limited applicability in terms of establishing flood prevention plans for urban areas. To resolve this limitation, it is necessary to perform hydraulic-model experiments and numerical analyses of structures having connecting channels with more diverse angles. Therefore, hydraulic-model experiments with connection angles of 22.5° and 67.5° are currently underway. In this study, the variations of flood flow around the confluence were investigated only physically and numerically. In a future study, the water velocity fields in and around the confluence area will be considered.
2022-04-15T13:23:51.739Z
2022-04-15T00:00:00.000
{ "year": 2022, "sha1": "3a696c3039a08dd8fb2f543fead57f6e68b6bb57", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s13201-022-01650-2.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "2a03e79eab4a8d60b4b05d42253a492b124501ed", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [] }
237988280
pes2o/s2orc
v3-fos-license
The short-term and long-term effects of industrial pollution on human health in China The impact of environmental pollution on human health has become a consensus. Based on provincial panel data for China from 2002 to 2017, this paper analyzes the impact of industrial wastes on human health. With respect to human health, the average annual frequency of physician visits per capita (AAFPV) is used as a measure of short-term human health, and all-cause mortality is used to illustrate long-term human health. The results show that, in the short term, with every 1 percent increase in the level of industrial smoke and dust, AAFPV would increase by 0.24 percent. This effect is significant in East China and West China. Central China is affected by industrial wastewater, with AAFPV increasing by 0.12 percent for every 1 percent increase in chemical oxygen demand per unit area. In the long term, water pollution is the main influencing factor of all-cause mortality. Introduction Since China adopted the economic reform and opening-up policies about 40 years ago, China's economy has maintained a rapid pace of growth, creating a "Chinese-style miracle". That fast economic growth inevitably led to excessive consumption of resources and environmental pollution. According to the Global Burden of Disease Study 2015 (GBD2015), China scored low on multiple health indicators, ranking only 92nd among 188 countries and regions. The report by the World Health Organization in 2016 on "Preventing Disease through Healthy Environments: A global assessment of the burden of disease from environmental risks" stated that there were 12.6 million deaths worldwide due to living in unhealthy environments in 2012 (An estimated 12.6 million deaths each year were attributable to unhealthy environments, 15 March 2016). Deaths from non-communicable diseases, mainly caused by air pollution (including exposure to second-hand smoke), were estimated at up to 8.2 million. There is now a large body of medical literature suggesting that environmental pollution has a direct relationship with human health. Diseases caused by air pollution, such as cardio-cerebrovascular diseases, respiratory diseases, and lung cancer, have become the main causes of death in China. Both cancer morbidity and mortality have surged along with economic growth [1]. Lung cancer is now a leading cause of cancer deaths in China [2]. According to the cause-of-death monitoring data released by the Chinese Center for Disease Control and Prevention, lung cancer was ranked at the top among causes of death in 2018 (53.40%). China's increasingly serious problems of environmental pollution pose a huge medical burden while seriously threatening public health [3]. Rapid industrialization, resource and environmental constraints, population agglomeration due to urbanization, and the deepening of population aging have worsened the environmental health situation in China. Worldwide, environmental pollution poses a great threat to human health. The Chinese government has begun paying special attention to environmental governance to improve citizens' health. Considering these two realistic backgrounds, the topic of this article has very important practical significance. 
The existing literature on the relationship between environmental pollution and health from the perspective of the natural sciences [4] focuses on the impacts of specific environmental pollutants on public health [5][6][7]. Those studies established the connection between specific pollutants in the environment and certain diseases. Most studies with a focus on economic issues examined the interactions between environmental pollution and economic development [8,9]. However, these studies do not build empirical models based on theory, so the choice of independent variables in the model lacks a theoretical basis. Based on the theory of the health production function, this paper constructs such a model, which gives it strong theoretical significance. Based on the realistic and theoretical background discussed above, the objective of this article is to construct a regional health production function, quantitatively analyze the impact of the three major industrial pollutants on the average annual frequency of physician visits and all-cause mortality (these two variables are regarded as proxy variables for short-term and long-term human health, respectively), build an intermediary link between environmental pollution and economic development, and provide empirical support for further research on the economic losses caused by environmental pollution. Since China joined the World Trade Organization in 2001, relevant provisions and rules have prompted China to improve and upgrade its production systems and its controls of environmental pollution. Based on this, we chose 2002 as the starting year for our study. Since the implementation of the Western Development Policy, government-led industrial transfers from the eastern coastal regions to the inland regions have been carried out. Such transfers have made substantial progress in the construction of major infrastructure such as transportation, water conservancy, energy, and communications in the west. With the progress of China's market economy, the costs of land, labor, water, electricity, and other factors in China's east have risen sharply. The east now has an urgent need for industrial transformation and upgrading. With such policies as "encouraging the eastern region to transfer industries to the central and western regions", "continuing to promote infrastructure and ecological construction in the western region", and "increasing investment in the development of the western region", industrial transfer was listed as a main task of national economic work in 2007. Under these policies, every Chinese sector has been actively exploring ways to accelerate the pace of industrial transfer between the eastern and western regions. Since 2017, the Chinese economy has shifted from pursuing quantity to pursuing quality, entering a phase of industrial transfer based on the market economy. Therefore, the ending year for this research was set to 2017. The main contributions of this paper are as follows: first, based on a quantitative research method, the impact of industrial pollution on human health is analyzed; second, the differences in the impact of industrial pollution on human health are analyzed from a spatial perspective; third, the micro-level health production function is extended to the regional macro perspective. In the subsequent sections, Section 2 reviews the relevant literature on human health, the factors that may influence human health, and the regional health production function. 
Section 3 discusses how to select the variables, including the dependent variables and independent variables to be included in the regional health production model. Section 4 descriptively analyzes the main variables both spatially and temporally. Section 5 reports findings from statistically building an econometric model for human health, the regional production function, and environmental pollution. Section 6 analyzes the results from the econometric model. The final section concludes the study and offers recommendations for policy makers and future researchers. 2 Literature review 2.1 Health capital and human health Different disciplines understand and define health capital differently. The relevant theories can be divided into three categories. The first category is what Schultz [10] put forward from the perspective of development economics. Schultz suggested that human capital (including health capital) is the quality and ability of people. Expenditure on health is a form of human capital investment, including health care expenditures. Based on this notion, Fogel [11] proposed the concept of healthy human capital, which also considers food consumption and nutrient intake. Most researchers believe that Fogel's healthy human capital points to a way of improving healthy human capital. However, Wang (2012) [12] found that Fogel's healthy human capital cannot become an endogenous driving force for economic growth; it can only accelerate the pace of economic growth when a region's economic growth is also powered by other auxiliary forces. The second theory of health capital is what Sen (1999) [13] proposed from the perspective of welfare economics. He suggested that health is a viable ability and a prerequisite for human subjectivity (subjectivity refers to the ability, role, and status that people display in the course of engaging in or practicing activities in life). Losing one's health removes the possibility of participating in other activities, which in turn removes the opportunities for freedom and choice. In that regard, health is an important dimension of human well-being. It promotes the realization of a viable ability and is also an important component of happiness. The third theory of health capital is one that was first put forward by Folland, Goodman, & Stano (2001) [14] from the perspective of health economics. It postulates that health is a good that can bring utility and can be used as a type of capital. As a capital stock, health is a consumer good that can be purchased through health care. Health can be influenced by environmental and economic factors. Moreover, the input factors for health production may include health care and non-health care factors; non-health care factors mainly refer to lifestyles that affect health, the social environment, and income status. Human health in this article is regional human health considered from the perspective of Grossman's (1972) [15] micro health production function, which represents the health quality and level of a regional population as a whole. Economic factors are important factors affecting human health, and the impact of income on human health is crucial. Kitagawa & Hauser (1973) [16] conducted a study on mortality in various states in the United States and found that differences in absolute income often lead to health-income stratification; that is, income is directly proportional to health. 
Preston (1975) [17], based on data from more than 40 countries in 1930–1960, found that income can explain 10–25% of rising life expectancy. 2.2 Influencing factors of human health Public health services can also be an important factor affecting human health. Based on an analysis of sub-Saharan Africa Demographic and Health Survey (DHS) data, Fortson (2011) [18] found that public health improvement and epidemiological control have a direct impact on the accumulation of healthy human capital investments. In addition, Chen (2010) [19], based on an analysis of health structure data from 30 provinces in China from 1993 to 2008, found that differences in regional health care structure have a significant impact on regional healthy human capital accumulation. Insufficient early nutritional intake in infants and young children can have a negative impact on their health status in adulthood. Through a survey conducted in the United Kingdom, Wadsworth & Kuh (1997) [20] found that insufficient nutritional intake in infants and young children tends to increase the incidences of cardiovascular disease, coronary heart disease, etc. during middle age. Ravelli et al. (1998) [21], based on data from Amsterdam in 1944–1945, found that insufficient nutrient intake in utero, due to famine during the third trimester of pregnancy, increased the incidence of diabetes in adulthood. In addition, inter-generational transmission also has an impact on health: a mother's poor health may be passed on to her children at birth. Environmental pollution is another important factor affecting human health. Qi & Lu (2015) [22], based on world pollution data for 1990–2010, found that environmental pollution explained 24% of global diseases and 23% of premature deaths; they also found a negative relationship between health status and PM10. Miao and Chen (2010) [23], based on data from a survey conducted in Shanxi Province in 2008, found that the major air pollutants PM10 and SO2 have negative impacts on residents' health demands (thinking of health as a commodity of consumption, "health demand" refers to the demand for that commodity), and that for every 1% increase in the concentrations of the two inhalable particulate matters, the health demands of residents were reduced by 0.199% and 0.127%, respectively. Peng, Tian & Liang (2002) [24] used field survey data from a municipal hospital in Shanghai in a study on the correlation between air pollutants and the daily outpatient volume for respiratory diseases; they found a significant correlation between the two. According to the Health Impact Assessment (HIA) of the World Health Organization (WHO), the determinants of health include the social and economic environment, the physical environment, and the person's individual characteristics and behaviors. 2.3 Regional health production function The Grossman health production function is a model constructed from a microscopic perspective. Many scholars use it as a theoretical basis for constructing a macroscopic health production function [25,26]. Puig-Junoy evaluated the health production effectiveness of OECD countries based on the Grossman micro health production function. Since then, a large body of literature has been built up based on the macroscopic approach, such as studies of health care [27], the health system [28], and medical services [29]. Wang & Chang (2007) [30] constructed a regional macro health production function. 
This function links economic, social, educational, and health variables for describing the overall health level of a region. In presenting their work, they pointed out that, since China's reform and opening up, the factors influencing health have changed: economic and educational factors tend to gradually promote health, while living factors have shown a certain negative contribution to health. Feng et al. (2019) [31] used the function to analyze the spatial effects of air pollution on public health in China based on spatial econometric methods. From the macroscopic point of view, this article focuses on the impact of regional environmental pollution on health status. The macro health production function states that, for a region, health is a product of environmental, social, health, educational, and economic variables, or:
H = F(environment, society, health, education, economy)
The overall health status of the region is seen as the result of a combination of environmental, social, health, education, and economic variables in the region. Human health variables All-cause mortality has been used widely as a measure of residents' health status when studying the impact of environmental pollution on human health [22,30]. Obviously, a model that uses all-cause mortality as the dependent variable needs to consider selecting a lag period, because, in general, current environmental pollution would only impact future mortality. Some scholars use self-assessed health data as the dependent variable as a workaround to avoid the problem of lag-period selection [24]. From the macro level, this article examines the short- and long-term impacts of environmental pollution on human health. Specifically, the average annual frequency of physician visits per capita (AAFPV; calculated as the total number of physician visits in the region divided by the total population at the end of the year) by local residents is used as a measure of the short-term health status of the region. In contrast, all-cause mortality is used to illustrate the long-term health status of the region. Therefore, there are two dependent variables in this study: the AAFPV to measure short-term health status, and regional mortality to measure long-term health status. Independent variables In order to examine the impact of environmental pollution on human health, a set of variables related to environmental pollution was carefully selected as independent variables. Moreover, other factors affecting human health are used as control variables. These selected variables are discussed below. Environment variables The literature on the health effects of environmental pollution has focused on air pollution. For example, Peng, Tian & Liang (2002) [24] used the content of NOx and SO2 in the air as independent variables to measure the impact of air pollution on respiratory diseases and the resulting losses. Miao & Chen (2010) [23] used the Grossman model to analyze the effects of two air pollutants, PM10 and SO2, on residents' health demands. Most existing studies focus on air pollution. On the one hand, the effects of air pollution on health are easy to observe, especially for respiratory diseases. On the other hand, the time-lag effect of air pollution on health is relatively short, and a deterioration of air pollution in the current period would significantly affect the health of current residents. 
However, in China, emissions of the three industrial wastes (waste gas, wastewater, and solid waste) are the main source of environmental pollution. Our study uses pollutant emission indicators per unit area (PEIPA) as the measurement of environmental pollution rather than per capita pollutant emission indicators (PCPEI). Using PEIPA has two advantages. First, as a density indicator, it reflects the intensity of pollution emissions: the environment itself has limits on the containment of pollution, and a density indicator can reflect the level of pollution emissions in a given region. Second, PEIPA overcomes the disadvantages of per capita indicators. In the case of per capita indicators, the more people there are in a region, the lower the per capita pollution emissions may be, which would suggest a lower level of pollution; this is not in line with experience and facts. For reasons of data availability, the environmental pollution of a region is measured here by industrial smoke and dust emissions per unit area, chemical oxygen demand discharge of industrial wastewater per unit area, and industrial solid waste discharge per unit area.

Control variables

This article uses environmental protection, population structure, education level, public health supply, and income status as control variables. These factors have different effects on human health. (1) Environmental protection: environmental pollution affects the health of residents in a region, and proper environmental protection practices can eliminate or reduce that effect; the degree of environmental pollution control directly affects residents' health in polluted areas. Taking into account the availability of data for successive years, we use the proportion of industrial pollution control investment to the region's GDP as the indicator of environmental protection in this study. (2) Population structure: most studies in the current literature indicate that there exist significant differences in health status between men and women. Zhao & Hou (2005) [32] used the Grossman model to analyze the health demands of urban residents in China from the perspective of human capital. Existing studies found that women's educational level has a positive impact on their health, while men's educational level has no significant effect on their health; in addition, age has a greater impact on men's health than on women's. Wang & Chang (2007) [30] constructed a Chinese health production function from a macro perspective and found that an increase in the proportion of women reduces the health level. This study selected the proportion of the female population to the total population as the indicator of population structure. (3) Education level: one of the important factors affecting health is the level of education. Compared with people with a higher education level, people with a lower education level have many disadvantages: they tend to lack effective ways to obtain reliable health information, which often leads to less healthy lifestyles, and they also lack the ability to choose jobs with lower health risks. This study selects the proportion of illiterate and semi-literate people over the age of 15 as the indicator of the educational level of a region. (4) Medical supply level: health care improves the health of residents through the treatment and prevention of diseases.
The level of medical supply in a region is important for the health of its residents. In this study, we chose three indicators of the level of medical supply in a region: health expenditures, the number of doctors per 1,000 people, and the number of hospital beds per 1,000 people. (5) Income level: income level determines residents' living standards and their ability to access medical services. An increase in income can improve the health of residents by improving their daily diets and nutritional levels, their housing and living environments, and the quantity and quality of health care services, and by increasing investments in education to accumulate educational human capital. Due to the differences in these aspects between regions, per capita indicators can more accurately reflect the impact of changes in income levels on population health. Considering the other available data, regional per capita disposable income of residents is selected to reflect changes in income levels. On the other hand, according to previous research, in the initial stage of industrialization economic development tends to bring a series of health problems, so an increase in personal income may cause some damage to health; for example, environmental pollution can threaten individuals' health. But as the economy continues to grow, the accumulation of personal wealth may improve a person's health as she/he becomes better able to shield against the negative effects of environmental pollution. Therefore, the squared value of per capita disposable income of residents is introduced in this paper to capture this possibility.

Data sources

The data selected for this study are inter-provincial data for 30 provinces of China (excluding Tibet because its data are not available; the same applies below) from 2002 to 2017. The data are all from the China Statistical Yearbooks and the China Environmental Yearbooks (China Yearbooks Full-text Database). To adjust for inflation, environmental protection expenditures, health fiscal expenditures, and per capita disposable incomes of residents were recalculated using 2000 prices as the base period. The absolute amounts of industrial pollution control investment and health fiscal expenditure were converted into proportions (of regional GDP and of total fiscal expenditure, respectively). The variables selected for this study are summarized in Table 1.

The spatial distribution of CODPA in Figure 1c shows that CODPA in some regions, such as Shanghai, Tianjin, Jiangsu, and Guangdong, is higher than the national average. Figure 1d shows the spatial distribution of ISDEPA: in some regions, such as Hebei, Liaoning, Shandong, Inner Mongolia, Xinjiang, Shanxi, Heilongjiang, and Jiangsu, ISDEPA is higher than the national average. Figure 1e shows the spatial distribution of ISWPA: in some regions (such as Shanghai, Shanxi, Liaoning, Hebei, Shandong, Tianjin, and Jiangsu) ISWPA is higher than the national average. From these spatial distribution maps, industries that emit large amounts of industrial wastewater are concentrated in coastal areas such as Shanghai, Jiangsu, and Guangdong; industries with large emissions of smoke and dust are concentrated along the northern margin of China; and industries with large solid waste output are concentrated in the central coastal area, including Shanghai, Jiangsu, Hebei, and Tianjin.
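To make the indicator choice concrete, the sketch below contrasts a PEIPA-style density indicator with a PCPEI-style per capita indicator. It is a minimal illustration, not the authors' actual pipeline: the province figures, file layout, and column names are hypothetical.

```python
import pandas as pd

# Hypothetical yearbook extract: one row per province-year, emissions in
# tonnes, land area in square kilometres (values are illustrative only).
raw = pd.DataFrame({
    "province": ["Shanghai", "Shanghai", "Qinghai", "Qinghai"],
    "year": [2016, 2017, 2016, 2017],
    "ind_solid_waste_t": [18_000_000, 17_500_000, 9_000_000, 9_300_000],
    "area_km2": [6_340, 6_340, 720_000, 720_000],
    "population": [24_200_000, 24_180_000, 5_900_000, 5_980_000],
})

# Density indicator (PEIPA-style): emissions per unit area.
raw["iswpa"] = raw["ind_solid_waste_t"] / raw["area_km2"]

# Per capita indicator (PCPEI-style), shown only for contrast: a populous
# province can look "clean" per head despite intense local emission density.
raw["iswpc"] = raw["ind_solid_waste_t"] / raw["population"]

print(raw[["province", "year", "iswpa", "iswpc"]])
```

For a compact, populous province the per capita column can look low while the per-area column is high, which is exactly the distortion the text argues against.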
Changes in the main variables over time

As study areas, we chose Beijing and Shanghai in the east, Chongqing in the west, and Hubei in the middle as typical regions for analysis. From Figure 2a, mortality fluctuated from year to year, but the mortality rate in Chongqing was significantly higher than in the other regions, with Guangdong having the lowest mortality rate. Figure 2b shows that AAFPV was increasing year by year, but Beijing and Shanghai had sudden decreases in 2016 and 2017, when Guangdong Province had a sudden increase. Figure 2c shows that ISWPA followed an inverted U-shaped trajectory, especially in Shanghai, where it reached its highest emission level in 2010 and began to decrease after that. Figure 2d shows a downward trend of CODPA in Shanghai; other regions showed volatility in their CODPA trends, though these had been decreasing in recent years. In the meantime, the largest reduction in ISDEPA was in Guangdong Province. Since China's 13th Five-Year Plan, the implementation of environmental protection policies has become stricter, which has led to industrial transfers along with improvements in environmental governance technologies. It is worth mentioning that Beijing has been dealing with sandstorms and the "great urban disease", resulting in a large number of industrial enterprises moving out and the floating population flowing out. This explains the above data characteristics to a certain extent.

Unit root test

For time series data, some non-stationary series may share the same trend over time even though there is no relationship between them. If such data are used directly, econometric theory warns that a "spurious regression" will arise, leaving the empirical results without practical significance. For panel data, the spurious-regression phenomenon can be effectively avoided by testing the stationarity of each time series. Commonly used testing methods are the LLC test and the Fisher-ADF test. In this paper, the panel unit root tests were performed with the LLC test, the Im-Pesaran-Shin test, the Fisher-ADF test, and the Fisher-PP test. Table 2 shows that, in levels, each variable is non-stationary according to these tests, but the first difference of each variable is stationary, so we treat the variables as integrated of order one, I(1). Notes: *, **, and *** denote significance at the 10%, 5%, and 1% levels, respectively; the same applies below.

Cointegration test

From the unit root tests, the selected variables are all I(1), which meets the precondition of the cointegration test. The test results show a t-statistic of -7.558 with a p-value less than 0.001, suggesting a long-term cointegration relationship between the dependent variables and the explanatory variables.

Modeling human health and environmental pollution

Since this paper focuses on comparing differences between regions, and in order to simplify the analysis, structural differences within each region are not considered; that is, the slope terms within each region are assumed equal, only individual effects are considered, and time effects are not considered. Therefore, it is only necessary to test whether the model is a pooled ("hybrid") estimation model, an individual fixed effects model, or an individual random effects model.
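As a concrete sketch of the Fisher-type combination behind the unit root tests described above, the snippet below runs an ADF test on each province's series and pools the p-values in the Maddala-Wu fashion. The input file and column layout are hypothetical; the authors' actual software and test options are not specified in the text.

```python
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def fisher_adf(panel: pd.DataFrame) -> tuple:
    """Maddala-Wu/Fisher-type panel unit root test: run an ADF test on each
    province's series and combine p-values as P = -2 * sum(ln p_i), which is
    chi-square with 2N degrees of freedom under the joint null that every
    series has a unit root."""
    pvals = []
    for col in panel.columns:
        series = panel[col].dropna()
        pvals.append(adfuller(series, autolag="AIC")[1])  # [1] is the p-value
    stat = -2.0 * np.sum(np.log(pvals))
    pval = stats.chi2.sf(stat, df=2 * len(pvals))
    return stat, pval

# Hypothetical wide panel: one column per province, rows are years 2002-2017.
panel = pd.read_csv("lnaafpv_wide.csv", index_col="year")  # assumed file
print("levels:      ", fisher_adf(panel))
print("first diffs: ", fisher_adf(panel.diff().dropna()))
```

With only 16 yearly observations per province the per-series ADF has little power, which is one reason several panel tests (LLC, IPS, Fisher-PP) are reported side by side in Table 2.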
According to the constraints placed on the intercept term, the model may be a pooled (hybrid) estimation model (no individual influence, with unchanged explanatory variable coefficients) or a variable intercept model (individual influence present, with unchanged explanatory variable coefficients); depending on whether the individual effect is related to the explanatory variables, the variable intercept model may be an individual fixed effects model or an individual random effects model. The F-test and the Hausman test are usually used to decide the model form, and the panel data of the eastern, central, and western regions are used for the model tests. The results are shown in Table 3. It can be seen from Table 3 that the F-test statistic for the average annual number of physician visits per capita in the national, eastern, and western models is significant at the 1% level, so the null hypothesis that the intercept term is constant is rejected; for the eastern, central, and western regions, the model should be set as a variable intercept model. At the same time, the Hausman test results for the three models show that the test statistic H is significant at the 1% level, thus rejecting the null hypothesis that the individual effect is independent of the explanatory variables; for the national, eastern, and western models, the model should therefore be set as an individual fixed effects model. Because the number of central provinces is too small to carry out the Hausman test, the individual fixed effects model is also adopted for the central region so that the eastern, central, and western regions can be compared.

When Qi & Lu (2015) [22] examined the impact of environmental pollution on life expectancy, mortality, labor supply, and labor productivity, they pointed out that there could be a certain lag period in the impact of environmental pollution on economic growth; in other words, the environmental pollution produced in the current economic development process would not immediately affect economic growth. While examining the impact of environmental pollution on human capital (life expectancy and mortality), most studies did not consider the problem of the lag period. However, there is often a significant lag in the impact of environmental pollution on health, especially for variables that are less sensitive to short-term environmental pollution, such as life expectancy and mortality. Wang & Chang (2007) [30], analyzing the health production function with GDP as a single influencing factor, found that the impact of regional GDP on mortality reached its maximum at a lag of eight periods. Since AAFPV was used as one measure of human capital, and this variable is more sensitive to environmental pollution (especially air pollution) than mortality and life expectancy, we modeled the impact of industrial smoke and dust emissions on AAFPV as a current-period effect, while industrial wastewater discharge and industrial solid waste emissions enter with a one-period lag; that is, we analyzed the impact of the previous period's industrial wastewater and solid waste on AAFPV. This article builds the short-term health production function as follows:

ln aafpv_it = β0 + β1 ln isdepa_it + β2 ln codpa_{i,t-1} + β3 ln iswpa_{i,t-1} + β4 phe_it + …

The model is estimated on the data of 30 provinces of China for 2002-2017. Due to the large differences between regions, using these data in a single pooled regression would yield biased results.
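A minimal sketch of estimating the short-term specification above with individual fixed effects, using the linearmodels package, is given below. The file name, column names, and control list are illustrative assumptions; only the lag structure (current ISDEPA, one-period-lagged CODPA and ISWPA) follows the text.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical long-format panel indexed by (province, year); variable names
# mirror the text (aafpv, isdepa, codpa, iswpa, phe) but are assumptions.
df = pd.read_csv("health_panel.csv").set_index(["province", "year"])

# One-period lags for wastewater COD and solid waste, as in the specification
# above; groupby(level=0) keeps the shift within each province.
for v in ["lncodpa", "lniswpa"]:
    df[v + "_lag1"] = df.groupby(level=0)[v].shift(1)

exog_cols = ["lnisdepa", "lncodpa_lag1", "lniswpa_lag1", "phe"]
data = df.dropna(subset=["lnaafpv"] + exog_cols)

# Individual (entity) fixed effects, no time effects, as selected by the
# F-test/Hausman results reported above.
fe = PanelOLS(data["lnaafpv"], data[exog_cols], entity_effects=True)
print(fe.fit(cov_type="clustered", cluster_entity=True).summary)
```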
Hence the data were divided into three parts: eastern, central, and western. The coefficient estimates of the model are shown in Table 4. An R² of 0.9 or higher indicates that the regression model fits the association between the independent and dependent variables well. The F-statistic of the regression model also passes the significance test, and the DW statistic lies between 1.7 and 2.4, indicating no serial autocorrelation in the residuals. In the nationwide model, ISDEPA shows a significant impact on AAFPV, while CODPA and ISWPA do not pass the significance test. With every 1 percent increase in ISDEPA, AAFPV would increase by 0.24 percent. This is in line with the fact that the effect of air pollution on health has a short time lag [34]. The proportion of health expenditure to total fiscal expenditure (PHE) and the number of beds per thousand people (BED) have significant impacts on AAFPV: every 1 percent increase in PHE would likely increase AAFPV by 4.7 percent, and a 1% increase in BED is associated with a 5.1 percent increase in AAFPV. When PIP increases by 1 percent, AAFPV is likely to increase by 0.6 percent. The proportion of the female population to the total population does not have a significant impact on AAFPV. Although, from a genetic point of view, women's life expectancy is indeed longer than men's, there is not yet a consensus on how the genders differ in health status. In terms of geography, ISDEPA in both the eastern and western regions shows a significant negative impact on residents' health: for every 1% increase in ISDEPA, AAFPV increases by 0.46 and 0.173 percent, respectively. CODPA has become the main source of pollution in the central regions; due to the time lag, a 1% increase in CODPA is likely to cause a 0.12% increase in AAFPV in the central regions in the subsequent year. Most studies on the spatial aggregation of water pollution suggest that water pollution in the eastern coastal regions is significantly more severe and concentrated than in the central and western regions [35,36]. Shi et al. (2017) [37], studying the spatial pattern and evolutionary structure of industrial water pollution discharge, found that the reduction of CODPA in China from 2005 to 2010 can be attributed mainly to the paper and paper products industry adopting new technology that purifies wastewater before releasing it. Among them, the papermaking and paper products industries in the western region contributed prominently, followed by the eastern region; the central region contributed only slightly, while the northeast region actually helped bring down the national average level of water pollution. Recently, water pollution-intensive industries have moved into the central regions, but the technical effect of adopting new emission-reduction technology there is not yet obvious, so wastewater pollution in the central regions has a greater impact on the health of residents than in other regions. Increases in health expenditures in the eastern, central, and western regions would all increase AAFPV, but the effect is highest in the eastern region, followed by the central region, and lastly the western region. The number of hospital beds and the number of physicians per 1,000 people both have significant positive impacts on AAFPV in the western region.
There is a significant inverted U-shaped relationship between per capita disposable income of residents and AAFPV in the central and western regions: as the per capita income level increases, AAFPV first increases and then decreases.

Long-term effects

This article constructs a long-term health production function with mortality as the dependent variable, in which each regressor enters with its own lag; here i = 1, 2, …, 30 indexes the provinces/regions, t = 1, 2, …, 16 indexes the periods, and a, b, …, h denote the lag periods. According to the long-term health production function model, provincial data for 2002-2017 are used in the regression analysis to obtain the estimated regression coefficients of the independent variables. To account for the significant lag of the independent variables' effect on mortality, the lag periods were introduced into the regression model. The estimated regression coefficients of the model are shown in Table 5. ISDEPA and ISWPA of the current period show no significant impact on current mortality; the same holds for ISDEPA with one-year and two-year lags and for lagged ISWPA. On the other hand, CODPA shows a significant impact on mortality. At the 5% significance level, CODPA with a one-year lag shows the greatest impact on mortality; however, there is no effect on the mortality rate at a three-year lag. Judging from the regression coefficients, each 1% increase in CODPA with a lag of one year and of two years is associated with 0.033% and 0.036% increases in the mortality rate, respectively. The impacts of changes in health care services on mortality also differ. The impact of health expenditure on mortality is not significant, while the number of physicians per 1,000 persons shows a significant impact: when the number of physicians per 1,000 persons increases by 1 percent, mortality decreases by 0.012 percent. At the 10% significance level, an increase in the number of hospital beds per 1,000 persons is associated with an increase in mortality. The lag of education's effect on mortality is long: the illiteracy rate in the current period has no significant effect on mortality, but the illiteracy rate with a three-year lag does show a significantly positive impact; when the illiteracy rate increases by one percent, mortality increases by 0.2 percent. The long-term impact of gender difference on health capital is more significant than the short-term impact. At the 5% significance level, the proportion of the female population, both in the current period and with a one-year lag, has a significantly positive impact on mortality; that is, the current proportion of the female population not only affects the current mortality rate but also affects the mortality rate of the following period. However, based on the estimated regression coefficient, this result seems to contradict the conclusion of existing studies that "women's life expectancy is higher than men's". To that end, we wish to point out that Brettingham (2005) [38] suggested that, as more modern women accept the concept of "work hard, more entertainment", the long-standing difference in life expectancy between men and women may eventually disappear; Brettingham predicted that men and women might have similar life expectancies by 2010.
Wang & Chang (2007) [30] used data for 1952-1984 and 1985-2003 to construct China's macro-health production function for each period. The results showed that the female population had completely opposite effects on mortality in the two periods: in the earlier period, the proportion of the female population effectively reduced the mortality rate, but with the development of the economy and society, Chinese women have faced more social pressures, and the incidence of some female-specific diseases, such as cervical cancer and breast cancer, has increased [38,39]. In addition, after the 1990s, Chinese women's postpartum depression and suicide rates were also increasing year by year [40,41]. These studies all reflect that the impact of an increase in the female proportion of the population on mortality has changed markedly, from significantly reducing mortality to increasing it.

As LPPCDI increased, the mortality rate first rose and then declined; both LPPCDI and its squared value passed the significance test, so mortality follows an inverted U-shaped trend in income. When the per capita income level was low, an increase in income could harm the level of health, and there was a positive correlation between LPPCDI and mortality. With the accumulation of personal wealth, people can pay more attention to investment in human health capital: a higher income level likely brings better medical services and more time for physical exercise, and at this stage an increase in income can effectively reduce the mortality rate. Liang (1994) [42] proposed that there is a certain relationship between the level of economic development and mortality at the regional scale; however, the relationship between economic development and population mortality is not linear but approximately logarithmic.

Discussion and conclusion

This article divides the impact of industrial pollution on human health into short-term effects and long-term effects. In the short term, the impact of industrial pollution on human health is reflected in an increase in AAFPV; in the long term, industrial pollution is linked to an increase in all-cause mortality. In the estimated regression coefficients of the short-term and long-term health production functions, the pollution variables differ in how significantly they affect human health. In the short term, ISDEPA has a significant impact on AAFPV, but CODPA and ISWPA have no significant impact on AAFPV. In the long term, CODPA has a significant impact on all-cause mortality, while ISDEPA and ISWPA have no significant impact on all-cause mortality. In terms of regional differences, AAFPV in the eastern and western regions is greatly affected by ISDEPA but less affected by CODPA and ISWPA; the central region shows the opposite pattern, with CODPA and ISWPA having a high impact on AAFPV and ISDEPA a smaller one. In addition, among the control variables, health fiscal expenditure and the illiteracy rate show no significant impact on regional all-cause mortality, but they are associated with increases in AAFPV in the short term. An increase in LPCDI causes all-cause mortality to follow an inverted U-shaped trend, increasing initially and then decreasing; in the short term, AAFPV of residents in the central and western regions also follows a similar inverted U-shaped trend as LPCDI increases.
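The inverted U-shaped income effects summarized above follow mechanically from a quadratic income term, and the small sketch below shows how the turning point would be located from such coefficients. The coefficient values are made up for illustration and are not the estimates from Table 4 or Table 5.

```python
import numpy as np

# With a specification of the form
#   mortality = ... + b1*lpcdi + b2*lpcdi**2 + ...,
# the extremum sits at lpcdi* = -b1 / (2*b2); b2 < 0 gives an inverted U
# (mortality first rises, then falls as income grows).
b1, b2 = 1.8, -0.1                     # hypothetical coefficients
lpcdi_star = -b1 / (2 * b2)
shape = "inverted U (rise then fall)" if b2 < 0 else "U (fall then rise)"
print(f"turning point at lpcdi = {lpcdi_star:.2f} ({shape})")
print(f"income at turning point = {np.exp(lpcdi_star):,.0f} (original units)")
```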
In order to comprehensively improve the overall health level of Chinese residents, we propose the following policy recommendations: (1) since the eastern, central, and western regions are shown to be sensitive to different types of industrial pollution, the eastern and western regions should pay more attention to air pollution control, while the central region should focus on the treatment of water pollution and solid waste pollution. (2) We should pay attention to women's health issues, especially the prevention and treatment of high-incidence diseases such as cervical cancer and breast cancer, to improve women's mental health and health investment. (3) We should improve the education level of residents, promote widespread health care knowledge, and raise awareness of medical and health knowledge among residents. (4) We should further reform the public health care system to improve its operational efficiency, with better government supervision of the health sector. (5) We might further develop the market economy so that it plays a leading role in promoting the adoption of new and efficient pollution-reduction technology in developed regions and in promoting the construction of infrastructure in less developed regions to support the transfer and upgrading of industries from developed regions.

For measuring the long-term health of a region, life expectancy is a relatively stable indicator, but it can only be obtained at the time of a census; therefore, this article chose all-cause mortality indicators. All-cause mortality in a region often shows great volatility over time. Still, from the descriptive analysis discussed earlier, all-cause mortality in developed regions (such as Guangdong, Beijing, and Shanghai) is much lower than in the central and western regions (such as Hubei and Chongqing), so all-cause mortality can still reflect the health of a region in the long run. An increase in AAFPV, given the same medical service levels and income levels, can reflect people's health levels in different regions; we therefore considered the two corresponding indicators (BED, LPCDI) as control variables in our analysis. Although the analysis in this article was somewhat abstract and macroscopic, this type of analysis is necessary, and it offers an important baseline understanding. Based on the analysis discussed here, the next step is to quantitatively measure the loss of human health caused by the various industrial pollutants. In the analysis of long-term influencing factors, the effects are lagged and the appropriate lag period is difficult to determine; in addition, it is difficult to separate out other influencing factors and avoid endogeneity problems. These are issues that we need to study further in later work.

Funding

This study was financially supported by the Major Program of the National Fund of Philosophy and Social Science of China (No. 18ZDA132).
Linearized Riccati Equation as a Tool for Nonlinear Optics of Weakly Excited Two-Level Systems

A linearized version of the Riccati-type differential equation for the ratio of population amplitudes in a laser-driven two-level system is used to calculate the induced electric dipole moment analytically. The formula found for the dipole moment is valid for weak excitations by smooth-shape pulses of arbitrary off-resonant frequencies, i.e., those producing neither exact one-photon nor odd multiphoton resonances. For a given pulse shape, the formula allows one to express the Fourier components of the induced dipole moment as functions of both laser frequency and intensity and to find the field-dependent refractive index of the system versus laser frequency.

Introduction

The model of two discrete levels is known [1] to serve as the most popular paradigm for different phenomena in the scope of light-matter interaction. Despite its simplicity, the model gives a deep understanding of both resonant and off-resonant processes because it can be solved analytically in an approximate way even if no rotating wave approximation is made. In the last 25 years, this two-level model has been extensively used in, e.g., theoretical investigations of the role played by bound-bound transitions in the generation of high harmonics of light from matter exposed to a linearly polarized laser field [2-21]. Generation of harmonics of a given light beam is a key phenomenon in nonlinear optics because this process is a source of new radiation of often much higher frequency than that of the primary beam; such new radiation is required for experimental spectroscopy, for example. In the investigations mentioned, both the regime of weak excitations [5,9,19] and that of strong excitations [3,5,8,17,19] were considered. For both regimes, the main aim of the papers cited above was to calculate the induced electric dipole moment in order to analyse the photon-emission spectrum of the system. To name a few, we point to some approaches used in the calculations: the optical Bloch equations [2,7,8,12,17,21], appropriately transformed equations for the level population amplitudes [3,9,11], the Mathieu-type differential equation for the amplitudes [5], the Floquet-Green formalism [18], and the Riccati-type equation for the ratio of the population amplitudes [19,20]. The present paper is a significant extension of Sect. 4.1 of our previous paper [19], where the weakly excited two-level model was solved analytically by applying the Riccati-type differential equation for population amplitudes, but under two restrictive approximations: the approximation of a square temporal profile of the laser pulse and the approximation of low laser frequency. The first approximation idealized the real laser pulse, completely neglecting its finite turn-on and turn-off times, while the other assumed the laser frequency to be much lower than the transition frequency (the so-called multiphoton excitation regime). Due to these restrictions, the applicability of our previous solution was strongly limited. Now, we take into consideration both a smooth temporal profile of the laser pulse and an arbitrary frequency of light. Mathematically, it is a much more challenging task. Nevertheless, we shall find an analytical solution covering a much broader scope of applicability than previously.
As we are interested in the regime of weak excitations too, we shall solve the so-called linearized form of the exact Riccati-type differential equation and then use this solution to derive an analytical formula for the induced electric dipole moment. Particular forms of the formula for the dipole moment will be obtained in different laser frequency and laser strength limits. Also, the amplitudes of the generated harmonics and the field-dependent refractive index of the system will be discussed as functions of laser frequency.

Riccati equation and its linearization

For a two-level system in a laser field, the Riccati-type equation is that for the ratio R = C2/C1. Here, C1 and C2 are the time-dependent population amplitudes of the two opposite-parity field-free states, i.e., the lower state 1 (initially occupied) and the upper state 2 (initially empty), respectively. This differential equation is of the form [19,22]

Ṙ(t) = i[Q*(t) − Q(t)R²(t)],   (1)

and results from the pair of equations for the state-population amplitudes [23]:

Ċ1(t) = iQ(t)C2(t),   (2)
Ċ2(t) = iQ*(t)C1(t),   (3)

where the dots over R and Cj stand for time derivatives. In the electric dipole approximation for the interaction Hamiltonian, but without employing the rotating wave approximation, the coupling parameter reads

Q(t) = Ω_R f(t) cos(ω0 t) e^{−iω21 t},   (4)

with ω21 = ω2 − ω1 being the transition frequency, ω0 the laser-field frequency, 0 ≤ f(t) ≤ 1 the smooth envelope of the laser pulse, and the standard Rabi frequency Ω_R = μ21·ε0/ℏ expressed by the dipole transition matrix element μ21 = ⟨2|er|1⟩ and the amplitude ε0 of the electric field of the linearly polarized beam. The variable R from (1) gives complete information about the population evolutions, |C1|² and |C2|², and about the induced electric dipole moment of the laser-driven two-level system. In terms of R, the state populations and the induced dipole moment are expressed as

|C1|² = 1/(1 + |R|²),   (5)
|C2|² = |R|²/(1 + |R|²),   (6)
d(t) = μ12 [R(t) e^{−iω21 t} + R*(t) e^{iω21 t}]/(1 + |R|²),   (7)

where we made use of the conservation law for the total population probability, |C1|² + |C2|² = 1. Throughout this paper we focus on weak excitations, |C2|² ≪ 1, meaning |R|² ≪ 1. In this case, the term quadratic in R in (1) is much smaller than the other term on the right-hand side and we can use the approximate procedure proposed in [19]. In short, we initially drop this quadratic term and start from the zero-order solution to (1):

R0(t) = i ∫_{t0}^{t} Q*(t′) dt′.   (8)

Then, the R² term is included by the substitution R = R0 + R1 with the restriction that |R1| ≪ |R0|. This two-part R, when substituted into (1), gives a different Riccati-type equation, but now for R1. However, rejecting the smallest, R1², term in the equation for R1, we reduce this nonlinear equation to the linear first-order differential equation

Ṙ1(t) = −iQ(t)[R0²(t) + 2R0(t)R1(t)].   (9)

As distinct from Eq. (1), Eq. (9) has the exact solution

R1(t) = −i ∫_{t0}^{t} Q(t′) R0²(t′) Z(t, t′) dt′,   (10)

where

Z(t, t′) = exp[−2i ∫_{t′}^{t} Q(t″) R0(t″) dt″].   (11)

The above solution for R1 can be expressed in a slightly different form using the relations Ṙ0 = iQ* and Ṙ0* = −iQ. Equations (8) and (10) give a formal solution R = R0 + R1 to (1) in the case of weak excitations. The procedure leading to such a solution is called the linearization of the starting Eq. (1). In [22], Rostovtsev et al. presented an alternative linearization procedure to the above one: in their approach, the R² term in (1) was replaced by 2R0R − R0². However, by putting R = R0 + R1 one converts such an approximated Eq. (1) for R into our Eq. (9) for R1, which means that the two linearization procedures [19,22] of Eq. (1) are equivalent. Rostovtsev et al. have shown numerically that this linearization procedure gives accurate results for the excitation probability.
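A short numerical sketch of this linearization check is given below. It assumes the conventions of Eqs. (1), (4), and (8) as written above, which are reconstructions of the garbled source equations (in particular the signs and the absence of a 1/2 factor in Q are assumptions of this rewrite), and it compares the numerically integrated full Riccati solution with the zero-order R0 for a weak sin²-shaped pulse.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Units of the laser frequency (omega0 = 1): x = Rabi/omega0, y = omega21/omega0.
# Values are chosen to satisfy the weak-excitation condition (13).
x, y, tp = 0.3, 8.72, 200.0 * 2 * np.pi       # pulse length ~200 optical cycles

f = lambda t: np.sin(np.pi * t / tp) ** 2     # smooth sin^2 envelope
Q = lambda t: x * f(t) * np.cos(t) * np.exp(-1j * y * t)

# Full Riccati equation under the reconstructed convention:
#   dR/dt = i [ Q*(t) - Q(t) R^2 ]
def rhs(t, u):
    R = u[0] + 1j * u[1]
    dR = 1j * (np.conj(Q(t)) - Q(t) * R**2)
    return [dR.real, dR.imag]

sol = solve_ivp(rhs, (0.0, tp), [0.0, 0.0], max_step=0.05, rtol=1e-8)
R = sol.y[0] + 1j * sol.y[1]

# Zero-order (linearized) solution R0(t) = i * integral of Q*(t') dt'.
R0 = 1j * np.cumsum(np.conj(Q(sol.t)) * np.gradient(sol.t))

print("max |R|^2    :", np.max(np.abs(R) ** 2))   # weak-excitation check
print("max |R - R0| :", np.max(np.abs(R - R0)))   # linearization error
```

For the chosen x and y, |R|² stays far below 1 and R0 tracks R closely, in line with the weak-excitation assumption and with the accuracy reported by Rostovtsev et al.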
Calculation of R1(t)

The integrals over time in (8), (10), and (11) can be calculated analytically for an arbitrary smooth-shape function f(t), but in an approximate way. To this end, we make use of two properties of f(t): it is zero at the initial time t0, and it is, together with its powers, the slowest function compared with the other time-dependent functions in the integrands when no exact odd-photon resonances occur in the system. Assuming the lack of such resonances, we perform the integrals by parts and neglect the emerging integrals that include the time derivatives of f and of higher powers of f. Beyond one-photon resonance, i.e., for ω0 ≠ ω21, Eq. (8) for R0 gives along this line

R0(t) = x f(t) e^{iω21 t} [y cos(ω0 t) − i sin(ω0 t)]/(y² − 1),   (12)

where x = Ω_R/ω0 and y = ω21/ω0 are dimensionless field-strength and field-frequency parameters, respectively. According to the applied linearization procedure, it has to be |R0|² ≪ 1, entailing the limitation

x²(y² + 1)/[2(y² − 1)²] ≪ 1,   (13)

where we have replaced cos²(ω0 t) and sin²(ω0 t) by their time-average values and f²(t) by its maximum value. In the limiting case of low laser frequencies (y ≫ 1, multiphoton excitation), Eq. (13) leads to (x/y)² ≪ 2, while for high laser frequencies (y ≪ 1) it leads to x² ≪ 2. For other laser frequencies, fixed in y, the strength parameter x fulfilling Eq. (13) can be estimated from Fig. 1 (solid line). Using (12) for R0 and Ṙ0* = −iQ, we find that the exponent in Z(t, t′) of (11) is governed by the quantities

a = Ω_R² ω21/(ω21² − ω0²),   (15)
b = a/(2ω0) = x²y/[2(y² − 1)],  c = a/(2ω21) = x²/[2(y² − 1)],   (16)

where we recognize in a the standard Stark shift of the transition frequency, while b (c) is the ratio of this shift to double the laser (transition) frequency. Due to (14), the minor contribution coming from df²(t′)/dt′ is neglected; as a result, the integrand in (10) takes the form (18). Then, we apply to Z(t, t′) the Fourier-Bessel expansions [24,25]

e^{iq sin θ} = Σ_n J_n(q) e^{inθ},   (19)
e^{q cos θ} = Σ_n I_n(q) e^{inθ},   (20)

where n runs over all positive and negative integers, J_n(q) is the Bessel function of the first kind, and I_n(q) is the modified Bessel function. For positive n, the Bessel function is represented by the series

J_n(q) = Σ_{k=0}^{∞} (−1)^k (q/2)^{n+2k}/[k!(n + k)!],   (21)

while the series for I_n(q) is obtained from the above one by removing the factor (−1)^k. For negative n, one needs to use the relations J_{−n}(q) = (−1)^n J_n(q) and I_{−n}(q) = I_n(q). After applying these expansions to Z(t, t′), we shift the summation indices appropriately in (18) to obtain the common function exp{i[(ω21 + (2n + 2m + 1)ω0)t′ + a ∫ f²(t″)dt″]} for all terms. Then, we assume the lack of any higher-order odd-photon resonance in the system (ω21 ≠ Nω0, with N being a positive odd number) and such a detuning from this resonance that the Stark shift satisfies |a| ≪ |ω21 + (2n + 2m + 1)ω0|. As a consequence, we can integrate Eq. (18) over t′, in order to obtain R1 from Eq. (10), using the procedure described at the beginning of Sect. 3. As a result we obtain the explicit expression (22) for R1(t), with coefficients A^{n′,m′}_{n,m} given by (23), in which y + 2(n + m) + 1 must be non-zero.

General formula for the dipole moment

We use the approximate solution R(t) = R0(t) + R1(t) to Eq. (1), with R0(t) given by (12) and R1(t) by (22), to write Eq. (7) for the induced electric dipole moment as the sum of

d0(t) = [2μ12 x y/(y² − 1)] f(t) cos(ω0 t)   (24)

and the part d1(t) given by (25). Since |R|² ≪ 1 for weak excitations, the dipole moment is (to first approximation) determined by the sum of Eqs. (24) and (25) only. The part d0(t), coming from R0, is known from standard first-order perturbation theory when this theory is applied to (2) and (3) for the population amplitudes C1 and C2. The other part, d1(t), comes from our R1 (Eq. (22)) obtained by solving the Riccati-type Eq. (1) analytically with the use of the linearization procedure.
Part d1(t) describes the generation of odd-order laser harmonics by the two-level system. The present d1(t) covers the case of a smooth pulse, f(t), of arbitrary frequency, ω0, and thus broadens substantially the earlier results ([19], Part 4.1, and [5(b)], Part II) obtained along different lines for the square pulse of low frequency (y ≫ 1) only. We mention that the coefficients A^{n′,m′}_{n,m} include Bessel functions of arguments dependent on x = Ω_R/ω0, where Ω_R = μ21·ε0/ℏ; thus each frequency component of d1(t) depends on a series of different powers of both the electric field ε0 and the transition dipole μ21.

Particular limits of d1(t)

In different (physically essential) limits, (25) takes much simpler forms.

4.2.1. Case y ≫ 1

First, we consider the case of low laser frequencies (ω0 ≪ ω21, y ≫ 1). In this case, the parameter b from (16) reduces to b ≈ x²/(2y), with c ≈ b/y. For y ≫ 1, we however have the restriction (x/y)² ≪ 2 resulting from the zero-order solution R0 (see (12) and (13)). Thus, we always have c ≪ 1. It means that the two generalized Bessel functions I_ν in (23) are very close to 1 for m = m′ = 0 and very close to zero for the other indices m and m′. We are thus allowed to make the approximations I_m(c f²(t)) ≈ δ_{m,0} and I_{m′}(c f²(t)) ≈ δ_{m′,0}, where δ_{α,β} is the Kronecker symbol. For y ≫ 1, it is also justified to retain only the leading term, linear in y, in the coefficients multiplying the Bessel functions J_ν in (23). In the limit y ≫ 1, Eq. (25) for d1(t) is thus reduced to the form (26); for |b| ≪ 1, the coefficient B_n^{n′} can be further simplified by retaining only the leading term in the series representation of a given Bessel function J_ν(b f²(t)). One can see that, for a reasonable y ≈ 10, the parameter b remains much smaller than 1 even for x as large as 1, i.e., for nominally strong fields.

4.2.2. Case y ≪ 1

In the opposite case of high laser frequencies (ω0 ≫ ω21, y ≪ 1), the parameters b and c convert into b′ = −x²y/2 and c′ = b′/y = −x²/2, respectively. Both |b′| and |c′| are much smaller than 1 because the applicability condition for the zero-order solution R0 requires x² ≪ 2 when y ≪ 1 (see (12) and (13)). Now, 1/y becomes the dominant term in the coefficients multiplying the Bessel functions J_ν in (23). With only this term retained, Eq. (25) for d1(t) transforms, for y ≪ 1, into the simpler form (28).

4.2.3. Case y = 0

Equations (28) and (29) cover the limiting case of the degenerate two-level system (ω21 = 0, y = 0). In this case, b′ = 0, and c′ still keeps the small non-zero value −x²/2. Thus, each Bessel function J_ν in (29) behaves like a Kronecker symbol, and the coefficient C^{n′,m′}_{n,m} is made proportional to (δ_{n,1} − δ_{n,0} − δ_{n,−1} + δ_{n,−2})δ_{n′,0}. When performing the summation over n and n′, as required by (28), we find that the contributions from δ_{n,1} and δ_{n,−2} cancel mutually, since I_ν = I_{−ν}, and the same concerns the contributions from δ_{n,0} and δ_{n,−1}. As a result, d1(t) from (28) becomes zero for y = 0. Consequently, no dipole moment is induced in the degenerate two-level system, since d0(t) given by Eq. (24) is also zero when y = 0. This conclusion is consistent with the general outcome of the exact Riccati Eq. (1) for ω21 = 0. In the limit ω21 = 0, the coupling parameter Q(t) given by (4) is made real, and Eq. (1) then has an exact solution for any Q(t); as purely imaginary for ω21 = 0, this solution produces no induced dipole moment from (7).

4.2.4. Arbitrary y but both |b| ≪ 1 and |c| ≪ 1

Now, we focus on the case of arbitrary laser frequencies (y) but such field strengths (x) that both |b| ≪ 1 and |c| ≪ 1.
Then, each Bessel function J_ν and each modified Bessel function I_ν in (23) for A^{n′,m′}_{n,m} can be approximated by the first term in its series representation (see (21)). Moreover, we are also allowed to retain in (23) only what results from the Bessel function J_ν of the lowest order |ν|. Due to (16), these approximations should work well when the parameters x and y fulfil the set of two inequalities

x² ≪ 2|y² − 1|/y ≡ f2(y),   (30)
x² ≪ 2|y² − 1| ≡ f3(y).   (31)

In Fig. 1, we present the functions f2 (dashed line) and f3 (dotted line) versus y and compare them with the function f1 (solid line) defined by (13), which imposes the additional restriction on x². These three restrictions, namely Eqs. (13), (30), and (31), have to be reconciled simultaneously. Thus, for a given y, the strength parameter x needs to be chosen to satisfy x² ≪ min(f1, f2, f3), where min(f1, f2, f3) means the smallest of the three functions. Figure 1 can be helpful for a quick estimation of the related parameters x and y for which one can apply the above-described approximations. In this regime one obtains the explicit harmonic components d^{3ω0}_1(t) and d^{5ω0}_1(t), Eqs. (32) and (33). Obviously, these formulae are valid outside the exact odd-order resonances, i.e., for y ≠ 1, 3 in (32) and y ≠ 1, 3, 5 in (33). As seen, the N-th component oscillating at frequency Nω0 is proportional to x^N in the approximation of both |b| ≪ 1 and |c| ≪ 1. With increasing N, the dependence of the N-th component on the laser frequency parameter y = ω21/ω0 gets more and more complicated due to the increase in the number of essential index combinations (n, m, n′, m′) that have to be included.

Dipole moment when both |b| ≪ 1 and |c| ≪ 1

Now, we estimate the effect of the denominator in the definition of the dipole moment, d(t) = [d0(t) + d1(t)]/(1 + |R|²), on d(t). Since |R|² ≪ 1 for weak excitations, this effect can be found by replacing |R|² by |R0 + R1|² and using the power expansion (1 + |R|²)^{−1} ≈ 1 − |R|² + … . Along this line we get the corrected dipole moment (35), with the correcting term d_cor(t) ≈ −[d0(t) + d1(t)]|R(t)|² to leading order. As an example, we shall find this correction under the same assumptions as in Sect. 4.2.4, i.e., that both |b| ≪ 1 and |c| ≪ 1. In analogy to the previous decomposition, the component d^{Nω0}_cor(t) of the total correction oscillates at frequency Nω0. A given component d^{Nω0}_cor(t) can be obtained elementarily by using (12), (24), (32), and (33). The component d^{1ω0}_cor(t) was found to be nonlinear in x and, as much smaller than d0(t), it was rejected. However, the components d^{Nω0}_cor(t), for N ≥ 3, turned out to be comparable to d^{Nω0}_1(t) given by (32) and (33). We have found that, for N = 3, the leading contribution (∼ x³) to d^{3ω0}_cor(t) comes from d0(t)|R0(t)|² and is given by (36). After including those corrections, the dipole moment is the sum over components, where d^{Nω0}(t) means the leading term in the dipole component oscillating at frequency Nω0. Obviously, d^{1ω0}(t) = d0(t) and is given by Eq. (24); for N ≥ 3, however, d^{Nω0}(t) = d^{Nω0}_cor(t). In conformity with Eqs. (24), (32), (33), (35), and (36), one obtains

d^{1ω0}(t) = d0(t) = [2μ12 x y/(y² − 1)] f(t) cos(ω0 t),   (37)

together with the analogous expressions (38) and (39) for N = 3 and N = 5. Since x = Ω_R/ω0, where Ω_R = μ21·ε0/ℏ, the leading term in a given dipole component is proportional to the appropriate even power of the transition dipole μ21 and the appropriate odd power of the electric field amplitude ε0. To draw the time-independent factors in (37)-(39) as functions of the laser frequency ω0, we introduce the parameters ρ = Ω_R/ω21 and z = y^{−1} = ω0/ω21, linked to x through the relation ρ/z = x. As distinct from x and y, the present laser strength and laser frequency parameters ρ and z, respectively, have the transition frequency ω21 in their denominators.
After expressing (37)-(39) in the language of ρ and z, we go over to the coefficients a^{Nω0}(z) = d^{Nω0}(t)/[μ12 ρ^N f^N(t) cos(Nω0 t)], which completely determine the dependence of the amplitudes of the dipole components d^{Nω0}(t) on the laser frequency parameter z = ω0/ω21. This dependence is shown in Fig. 2.

Expanded in odd powers of x, the component oscillating at the laser frequency reads

d^{1ω0}(t) = μ12 f(t) cos(ω0 t)[a0(y) x + a2(y) x³ + a4(y) x⁵ + …],   (44)

with a0(y) = 2y/(y² − 1). The first term in Eq. (44) is obviously equal to Eq. (37), while the other terms point to the dependence of the amplitude of the oscillation at frequency ω0 on higher powers of the laser strength parameter x. For a given ω0, these additional terms describe the dependence of the refractive index n of the two-level system on the strength of the laser field, since from (44) and the Lorentz-Lorenz formula we get the field-dependent relation (48), where N is the density of the medium, b_n(z) = a_n(y)/z^{n+1} with z = ω0/ω21 and y = z^{−1}, and ρ = Ω_R/ω21. In Fig. 3, we show the dependence of the coefficients b_n on the laser frequency parameter z. To estimate the effect of laser strength on the refractive index, we take as the two-level model the two lowest states of the hydrogen atom, i.e., the 1S = |1⟩ and 2P = |2⟩ states separated by the transition energy ℏω21 = 10.2 eV. The electric field interacting with this system is assumed to come from a neodymium glass laser (ℏω0 = 1.17 eV). Consequently, the frequency parameter y = ω21/ω0 = 8.72 corresponds to the regime of multiphoton excitation. Also, we take x = 0.3 for the light strength parameter. Since |r21| = z21 = 4√2 (2/3)⁵ a.u., the taken x means a laser electric field amplitude ε0 ≈ 0.017 a.u., corresponding to the laser intensity 10¹³ W/cm². This is the highest admissible intensity, because above it the model becomes questionable due to the neglect of possible upper-state ionization. For the assumed values of x and y, we find that both |b| and |c| are much smaller than 1 (precisely, b = 5 × 10⁻³ and c = 6 × 10⁻⁴ from (16)), as required for the applicability of Eq. (48). Since z = 1/y = 0.115 and ρ = x/y = 0.0344, the leading effect of laser strength on the refractive index comes from the term b2(z)ρ² in (48). The ratio of this term to the standard strength-independent term b0(z) is evaluated as b2(z)ρ²/b0(z) ≈ −(3/2)ρ² = −1.8 × 10⁻³ for the taken frequency and strength of the laser field.

Summary

In this paper, we have worked with an appropriately linearized version of the exact, quadratically nonlinear Riccati-type differential equation for the ratio of the population amplitudes in a weakly laser-excited two-level system. First, we solved this linearized equation analytically for arbitrary off-resonant laser frequencies and arbitrary smooth shapes of the laser pulse. Then, this solution was used to derive an explicit formula for the laser-induced electric dipole moment in the system. Though complicated at first sight, the formula has been shown to take substantially simpler forms in different laser frequency and laser strength limits. Also, we have used this formula to find some representative Fourier components of the induced dipole moment and to discuss the dependence of the amplitudes of these components on the laser frequency and strength. To conclude, this paper shows that the linearized Riccati equation for population amplitudes is an effective analytical method for the nonlinear optics of two-level systems weakly excited by a smooth laser pulse of a frequency producing neither one-photon nor odd multiphoton resonances.
The assumption of a smooth laser pulse physically means that, in the time evolution of the pulse electric field f(t) cos(ω0 t), the full width at half maximum (FWHM) of the pulse shape function f(t) is much greater than the optical period T = 2π/ω0. This is well justified for all standard laser pulses except the recently celebrated few-cycle pulses. Under this reasonable assumption, the time integral in Eq. (8) has to be performed over the product of the slowly varying shape function f(t′) and the function cos(ω0 t′) exp(iω21 t′), which is fast varying as long as ω0 differs substantially from ω21, i.e., in the absence of one-photon resonance. The absence of one-photon resonance practically means that the one-photon detuning |ω21 − ω0| well exceeds the spectral width of f(t). In the present paper, we were in fact focused on this off-resonance case. The motivation was that, in the majority of ground-state atoms (except some alkali atoms), molecules, and ions, the transition frequency to the first excited state, ω21, is higher or even much higher than a typical optical frequency ω0 (see Sect. 4.4, for example), so that multiphoton excitation is the case usually met. In this case of our interest, the integration in Eq. (8) was performed by parts and only the dominant term, proportional to f(t), was retained. The above assumptions of a smooth pulse and the absence of one-photon resonance have led us to the explicit R0(t), given by (12), valid for an arbitrary pulse shape function f(t). Thanks to this R0(t), we were then able to find the general expression (22) for the small correction R1(t), applicable when no higher-order odd-photon resonance takes place in the two-level system.

The case of any odd-photon resonance in the system is more cumbersome. If one-photon resonance (ω0 = ω21) is present, the starting Eq. (8) for R0(t) splits into two integrals, namely (1/2)∫_{t0}^{t} f(t′) exp(2iω21 t′) dt′ and (1/2)∫_{t0}^{t} f(t′) dt′. For the assumed smooth-shape pulse, the first integral can be approximated by the leading term, −i f(t) exp(2iω21 t)/(4ω21), coming from integration by parts (see the beginning of Sect. 3). However, the other integral makes it impossible to find R0(t) in an explicit form valid for an arbitrary f(t); as a matter of fact, we are forced to choose a given shape for f(t) at this stage. For some shapes (e.g., the familiar f(t) = sin²(πt/tp)), the integral ∫_{t0}^{t} f(t′) dt′ is expressed in terms of elementary functions, while for other shapes (e.g., the Gaussian shape f(t) = exp(−t²/tp²)) it involves non-elementary functions (the error function). Only in the first case could the small correction R1(t) in principle be found along an analytical line similar to that presented in Sect. 3. Consequently, no general expression like Eq. (22) can be obtained for the correction R1(t) when one-photon resonance takes place. If a higher-order odd-photon resonance is present instead of one-photon resonance, then R0(t) given by Eq. (12) is still valid. However, Eq. (18), being the integrand in Eq. (10) for R1(t), then needs to be separated into its resonant and off-resonant parts with the use of the Fourier-Bessel expansions given by Eqs. (19) and (20). The off-resonant part of Eq. (18) can be integrated over time in the same way as described before Eq. (22); the resonant part, on the other hand, has to be integrated on its own.
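For the two envelope shapes named above, the distinction is easy to verify explicitly; a quick check, taking t0 = 0 for compactness (the lower limit is an assumption of this illustration):

```latex
\int_{0}^{t}\sin^{2}\!\Big(\frac{\pi t'}{t_{p}}\Big)\,\mathrm{d}t'
  = \frac{t}{2}-\frac{t_{p}}{4\pi}\,\sin\!\Big(\frac{2\pi t}{t_{p}}\Big),
\qquad
\int_{0}^{t}\mathrm{e}^{-t'^{2}/t_{p}^{2}}\,\mathrm{d}t'
  = \frac{\sqrt{\pi}\,t_{p}}{2}\,\operatorname{erf}\!\Big(\frac{t}{t_{p}}\Big).
```

The first result is elementary, while the second involves the error function, a non-elementary special function, exactly as stated above.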
Moreover, all processes damping the upper level in a more realistic two-level system (e.g., spontaneous emission and ionization) should be taken into account in the case of any odd-photon resonance. Phenomenologically, this damping can be included in the model by adding the term −i(γ/2)C2 to the right-hand side of Eq. (3), where γ stands for the damping rate. As a consequence, the right-hand side of the Riccati Eq. (1) is enriched by the extra term −i(γ/2)R, leading to appropriate changes in Eq. (8) for R0 and Eq. (9) for R1. Thus, the above analysis shows that, at any exact or near odd-photon resonance, a separate treatment of the two-level model would be necessary within the framework of the linearized Riccati equation.
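As a numerical appendix to the hydrogen-atom example of Sect. 4.4, the quoted numbers can be re-derived in a few lines. The closed forms b = x²y/[2(y² − 1)] and c = b/y are the reconstructions of (16) used in this rewrite, so treat them as assumptions checked against the values stated in the text; the atomic-unit conversions are standard.

```python
import numpy as np

hbar_w21, hbar_w0, x = 10.2, 1.17, 0.3        # eV, eV, dimensionless
y = hbar_w21 / hbar_w0                         # ~8.72
z, rho = 1 / y, x / y                          # z ~ 0.115, rho ~ 0.0344

b = x**2 * y / (2 * (y**2 - 1))                # ~5e-3, as quoted
c = b / y                                      # ~6e-4, as quoted

z21 = 4 * np.sqrt(2) * (2 / 3) ** 5            # 1s-2p dipole length, a.u.
w0_au = hbar_w0 / 27.2114                      # photon energy in hartree
eps0 = x * w0_au / z21                         # field amplitude, a.u. (~0.017)
intensity = 3.509e16 * eps0**2                 # W/cm^2 (~1e13)

ratio = -1.5 * rho**2                          # b2*rho^2/b0 = -(3/2)*rho^2
print(y, b, c, eps0, intensity, ratio)         # last value ~ -1.8e-3
```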
Microarray analysis of differentially expressed microRNAs in myelodysplastic syndromes

Abstract

Background: Our study aimed to analyze differential microRNA expression between myelodysplastic syndromes (MDS) and normal bone marrow, and to identify novel microRNAs relevant to MDS pathogenesis. Methods: MiRNA microarray analysis was used to profile microRNA expression levels in MDS and normal bone marrow. Quantitative real-time polymerase chain reaction (qRT-PCR) was employed to verify differentially expressed microRNAs. Results: MiRNA microarray analysis showed 96 significantly upregulated (e.g., miR-146a-5p, miR-151a-3p, miR-125b-5p) and 198 significantly downregulated (e.g., miR-181a-2-3p, miR-124-3p, miR-550a-3p) microRNAs in MDS compared with normal bone marrow. qRT-PCR confirmed the microarray analysis: the expression of six microRNAs (miR-155-5p, miR-146a-5p, miR-151a-3p, miR-221-3p, miR-125b-5p, and miR-10a-5p) was significantly higher in MDS, while three microRNAs (miR-181a-2-3p, miR-124-3p, and miR-550a-3p) were significantly downregulated in MDS. Bioinformatics analysis demonstrated that the differentially expressed microRNAs might participate in MDS pathogenesis by regulating hematopoiesis, leukocyte migration, the leukocyte apoptotic process, and the hematopoietic cell lineage. Conclusions: Our study indicates that differentially expressed microRNAs might play a key role in MDS pathogenesis by regulating potentially relevant functional and signaling pathways. Targeting these microRNAs may provide new treatment modalities for MDS.

Introduction

Myelodysplastic syndromes (MDS) are a group of malignant clonal diseases originating from hematopoietic stem cells, characterized by abnormal growth of hematopoietic cells, ineffective hematopoiesis, and a high risk of transformation into acute myeloid leukemia (AML). [1] MDS mainly occur in the elderly, with an incidence rate that increases with age, and they carry high mortality and low cure rates. Gene mutations [2,3] and chromosomal abnormalities [4] have been reported to be involved in the progression of MDS. However, its molecular pathogenesis and the exact mechanism of transformation into AML have not yet been fully elucidated. A microRNA is a type of endogenous non-coding RNA of 19 to 25 nucleotides in length that is completely or incompletely complementary to the 3'-UTR region of its target gene. Binding of a microRNA regulates gene expression at the post-transcriptional level through degradation of its target mRNA or inhibition of mRNA translation. [5] Strong evidence suggests that microRNAs play crucial roles in the regulation of hematopoiesis. [6-8] Furthermore, a variety of studies have reported that differentially expressed microRNAs are associated with the transformation of MDS into AML [9,10] and with clinical outcomes. [11-13] Ekapun et al [14] reported that DZNep (3-deazaneplanocin A) could inhibit the expression of let-7b, leading to a decrease in the proportion of cells in S phase in the MDS-L cell line. Recently, there has been growing interest in microRNA microarray technology for profiling microRNAs; microRNA expression profiling allows the identification of novel microRNAs associated with MDS pathogenesis. In the present study, we screened differentially expressed microRNAs in MDS and normal bone marrow using microRNA microarray technology and verified selected microRNAs by quantitative real-time polymerase chain reaction (qRT-PCR), to evaluate novel microRNAs that might be relevant to MDS pathogenesis.
Patient samples

Bone marrow (BM) was obtained in the Department of Hematology at The First Affiliated Hospital of Guangxi Medical University, Nanning, China, from 2012 to 2015. BM was extracted from patients at the time of diagnosis. MDS was diagnosed based on the WHO Recommended Criteria (2008), and patients were stratified based on the International Prognostic Scoring System. Patient characteristics are displayed in Table 1. Twelve normal bone marrow samples were obtained from healthy volunteers and donors who were free of any neoplastic disease. All participants gave informed consent according to the Declaration of Helsinki. The study was approved by the Human Ethics Committee Review Board at Guangxi Medical University, Nanning, China.

RNA extraction

We separated bone marrow mononuclear cells (BM-MNCs) using density gradient centrifugation. Total RNA was isolated from the BM-MNCs of twenty patients and twelve controls using TRIzol reagent (Invitrogen) following the manufacturer's instructions.

miRNA microarray and array data analysis

Total RNA samples were analyzed with the miRCURY LNA Array (v.18.0) (Exiqon). [15] We imported scanned images into GenePix Pro 6.0 software (Axon) for grid alignment and data extraction. Replicated microRNAs were averaged, and we chose microRNAs with intensities ≥ 30 to calculate the normalization factor. The expression data were normalized using median normalization, after which a volcano plot was used to identify significantly differentially expressed microRNAs. A heatmap was created to display the microRNA expression profiles of the samples. Statistically significant differentially expressed microRNAs were defined as P < .05 and |logFC| > 1 (a minimal sketch of this filtering step is given at the end of this section).

Bioinformatics analysis

The TargetScan, miRanda, and miRDB databases were assessed to predict target genes of the differentially expressed microRNAs. Subsequently, these predicted target genes were integrated with the identified common DEGs of the GSE114869 and GSE107400 datasets to obtain potential target genes of the validated microRNAs. These identified potential targets then underwent Gene Ontology (GO) classification and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis for functional and signaling pathway analysis. P < .05 indicated statistical significance. Additionally, the STRING database (available at: http://string-db.org), an online tool for the structural and functional analysis of protein interactions, was used to construct a PPI network of the potential microRNA target genes.

Statistical analysis

SPSS version 17.0 (SPSS Inc., Chicago, IL) was employed for statistical analysis. The Student t test (2-sided) was employed for comparisons of 2-group parameters. P < .05 was considered statistically significant.

Study design and analysis

Twenty MDS patients and twelve healthy controls were included in the study. From these samples, 8 patients (aged 47 to 73 years, 5 males and 3 females) and 6 healthy controls (aged 46 to 61 years, 3 males and 3 females) were used for the microarray study. Another twelve MDS patients (aged 38 to 67 years, 7 males and 5 females) and 6 normal controls (aged 41 to 52 years, 4 males and 2 females) were used for qRT-PCR validation. Identified differentially expressed microRNAs were validated by qRT-PCR. Differentially expressed mRNAs were discovered using the GEO datasets. Potential microRNA target genes were those identified by the prediction algorithm analysis that also exhibited differential expression in the GEO datasets.
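The filtering step referenced above can be sketched in a few lines of pandas. The input file and column names are hypothetical stand-ins for the normalized microarray output; only the thresholds (P < .05, |logFC| > 1) come from the text.

```python
import numpy as np
import pandas as pd

# Hypothetical normalized expression summary: rows are microRNAs, with
# group means and a per-microRNA p-value already computed (names illustrative).
res = pd.read_csv("mirna_results.csv", index_col="mirna")  # assumed file
# expected columns: mean_mds, mean_ctrl, pvalue

res["logFC"] = np.log2(res["mean_mds"] / res["mean_ctrl"])

# Volcano-plot criteria used in the text: P < .05 and |logFC| > 1.
sig = res[(res["pvalue"] < 0.05) & (res["logFC"].abs() > 1)]
up = sig[sig["logFC"] > 0].sort_values("logFC", ascending=False)
down = sig[sig["logFC"] < 0].sort_values("logFC")

print(f"{len(up)} upregulated, {len(down)} downregulated")
print(up.head(), down.head(), sep="\n")
```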
Subsequently, the potential microRNA target genes were subjected to bioinformatics analysis (Fig. 1).

Identification of DEGs in MDS
A total of 67,528 probes corresponding to 25,875 genes were identified in the GSE114869 and GSE107400 datasets. Statistically significant DEGs were defined as P < .05 and |logFC| > 1. Using Venny 2.0.2, we found 490 common DEGs between MDS and normal bone marrow in the GSE114869 and GSE107400 datasets (Fig. 4). The 490 common DEGs were used as identification criteria for potential microRNA target genes.

Bioinformatics analysis
We used the TargetScan, miRanda, and miRDB databases to predict the target genes of the 9 validated microRNAs. To improve the reliability of the predicted target genes, we intersected them with the identified DEGs to obtain the potential microRNA target genes. As a result, 96 potential microRNA target genes were identified (Table 2). To further evaluate the potential implications of these validated microRNAs, GO analysis was conducted to estimate the function of the identified target genes, covering biological processes, molecular functions, and cellular components. For biological processes, target genes were classified into 73 categories, including involvement in positive regulation of cell adhesion, inflammatory response, and hemopoiesis. For molecular functions, the results included receptor activity, protein homodimerization activity, and ATP binding. Finally, the cellular components mainly involved the external side of the plasma membrane, the integral component of the plasma membrane, and the cytoplasm (Fig. 5). KEGG pathway analysis revealed that the potential microRNA target genes might play roles in hematopoietic cell lineage and cytokine-cytokine receptor interaction (Fig. 5). Additionally, we employed the STRING database (available at: http://string-db.org) to create PPI networks for the 96 identified potential microRNA target genes. After removing the isolated and partially connected nodes, a complex network of potential microRNA target genes was constructed (Fig. 6).

Discussion
In this study, we evaluated differentially expressed microRNAs between MDS and normal bone marrow samples using microarray analysis, a powerful technology widely employed to discover genome-wide expression variability of microRNAs. A total of 96 upregulated and 198 downregulated microRNAs were identified in MDS. Among the differentially expressed microRNAs in the microarray results, ten selected microRNAs were also assessed using qRT-PCR. The expression of 9 microRNAs (eg, miR-146a-5p, miR-151a-3p, miR-125b-5p) was consistent with the microarray results, indicating that these 9 microRNAs might make a significant contribution to MDS pathogenesis. The discordance of miR-136-5p expression between microarray identification and qRT-PCR verification might be attributed to false-positive microarray results; a study with a larger sample size will be conducted to address this. miR-155 has been demonstrated to be dysregulated in different types of malignancies, such as cervical cancer, [16] breast cancer, [17] colon cancer, [18] gastric cancer, [19] as well as AML. [20] In cervical cancer, miR-155 promotes malignant tumor cell phenotypes through direct targeting of TP53INP1. [21] Additionally, miR-155 is overexpressed in AML and was identified as a potential biomarker for detecting AML. [22] Wang [23] et al.
demonstrated that miR-146a can promote cell proliferation and suppress cell apoptosis via the downregulation of CNTFR in AML and ALL. In another study, Spinello [24] et al found that miR-146a was remarkably elevated in AML and promoted leukemogenesis through targeting of CXCR4. In our study, miR-146a was upregulated with an approximately 2.79-fold change in MDS compared with normal controls, indicating that miR-146a may have a similar effect in MDS. Additionally, Lee [25] et al demonstrated that miR-221 was markedly overexpressed in AML. In an earlier study, Georgiantas [26] et al reported that miR-221 might serve as a myelopoiesis suppressor by inhibiting molecules involved in myeloid development. Similar to AML, MDS is characterized by myeloid development abnormalities. Therefore, we inferred that the overexpression of miR-221 might promote MDS progression via the inhibition of myeloid development. It has been suggested that miR-125b is also overexpressed and can inhibit myeloid cell differentiation in AML and MDS. [27] Consistent with the previous study, our study found that miR-125b was upregulated with an approximately 2.89-fold change in MDS compared with normal controls. A variety of studies have demonstrated that miR-10a is overexpressed and correlates with an adverse prognosis in AML. [28,29] In addition, miR-10a also plays a key role in myeloid differentiation. [29] In our study, microarray results and qRT-PCR validation both revealed that miR-10a was significantly elevated in MDS compared with normal controls, suggesting that miR-10a may have a similar effect in MDS. It has been reported that a higher level of miR-181a-2 is correlated with a better clinical outcome in patients with AML. [30] Considering the prognostic role of miR-181a-2 in AML, we inferred that the downregulation of miR-181a-2 might contribute to the progression of MDS. Additionally, Wang [11] et al demonstrated that miR-124 is hypermethylated in MDS, and its hypermethylation is significantly correlated with an adverse prognosis. Likewise, we found that miR-124 expression in MDS was significantly lower than in normal controls in our study. In this study, GO analysis showed that the differentially expressed microRNAs were involved in biological processes such as hematopoiesis, leukocyte migration, and negative regulation of leukocyte apoptotic processes, which have been reported to be major contributors to MDS pathogenesis, [31-33] indicating that the differentially expressed microRNAs may participate in the progression of MDS. The KEGG pathway analysis displayed an involvement of the hematopoietic cell lineage. Previous studies have shown that MDS is characterized by the abnormal development of 1 or more hematopoietic cell lineages. [34-36] This may explain how the differentially expressed microRNAs contribute to the progression of MDS. We created a PPI network of the potential miRNA target genes and discovered 45 closely related genes. Of the microRNAs verified in this study, miR-151a-3p and miR-550a-3p have not been evaluated before in hematopoietic malignancies, but they have been demonstrated to play a critical role in other cancers. Latchana [37] et al reported that the expression of miR-151a-3p was significantly decreased in the plasma of metastatic melanoma patients after surgical resection, indicating that miR-151a-3p may serve as an oncogene in metastatic melanoma.
Similarly, Zhu [38] et al confirmed that miR-151a-3p was markedly elevated in metastatic renal cell carcinoma and might promote carcinogenesis by targeting MCL1. In our study, miR-151a-3p was also markedly elevated in MDS. Further experiments are required to explore the effect of miR-151a-3p in MDS. One study reported that miR-550a-3p was reduced in breast cancer and was associated with inhibition of disease development. [39] In that study, ERK1 and ERK2 were confirmed to be target genes of miR-550a-3p, so downregulation of miR-550a-3p leads to the upregulation of ERK1 and ERK2. In a previous study, ERK1/2 was demonstrated to contribute to the transformation of MDS into AML. [40] Therefore, we inferred that the downregulation of miR-550a-3p might promote the progression of MDS by upregulating ERK1/2. The potential mechanisms in which miR-550a-3p is involved should be evaluated in MDS cell culture models in future studies.
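The fold changes quoted in this discussion (eg, ~2.79-fold for miR-146a) are typically derived from qRT-PCR Ct values; the paper does not state its quantification formula, so the sketch below assumes the widely used 2^-ΔΔCt method with a small-RNA reference gene, purely for illustration.

# Relative quantification by the 2^-ΔΔCt method (assumed, not confirmed by the paper).
# Reference gene (eg, U6) and all Ct values below are made up for illustration.
ct_target_mds, ct_ref_mds = 24.1, 20.0    # MDS sample
ct_target_ctrl, ct_ref_ctrl = 25.9, 20.3  # normal control

delta_ct_mds = ct_target_mds - ct_ref_mds    # normalize to the reference gene
delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
delta_delta_ct = delta_ct_mds - delta_ct_ctrl
fold_change = 2 ** (-delta_delta_ct)         # relative expression, MDS vs control
print(f"fold change = {fold_change:.2f}")    # ≈ 2.83 with these made-up Ct values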
Non-invasive subsurface imaging to investigate the site evolution of Machu Picchu

The construction history of a site is partially preserved underground and can be revealed through archaeological investigations, including excavations, integrated with earth observation (EO) methods and technologies that make it possible to overcome some operational limits regarding the areal dimensions and the investigation depths, along with the invasiveness of the excavations themselves. An integrated approach based on EO and archaeological records has been applied to improve the knowledge of Machu Picchu. Attention has been focused on the first construction phase of Machu Picchu, and for this reason the investigations were directed to the imaging and characterization of the subsoil of the Plaza Principal, considered the core of the whole archaeological area. Archaeological records and multiscale remote sensing (including satellite, UAS, and geophysical surveys) enabled the identification and characterization of the first construction phase of the site, including the preparation phases before building Machu Picchu. The interpretative hypothesis on the constructive history of Machu Picchu starts from the identification and use of the quarry, followed by the planning and setting of the drainage systems, and by the next steps based on diverse reshaping phases of what would become the central plaza.

Phase 0: the drainage basin
The geomorphological analysis (for additional details see Sect. C and Fig. S12 in SI) suggested that the Plaza Principal is located above a small catchment, with its drainage basin placed between the two reliefs (Figs. 1d,e and 3). This catchment forms part of an impluvium furrow oriented northwest-southeast and composed of granite and, subordinately, granodiorite blocks. Prior to any modification, this area comprised small basins with a relatively young drainage network 5. The site was characterized by fractured granite bedrock evolving into granite chaos (Fig. 3a-c) as a result of a succession of intense precipitation events. The considerable abundance of rock, also evident from the Electrical resistivity tomography (ERT) by Best et al. 17 shown in Fig. 3d, provided easily available building material.

The geophysical investigations (see also Sects. B and C in SI) confirmed the geological and geomorphological assumptions on the original shape of the area (Fig. 3). Results from ERT and Ground Penetrating Radar (GPR) located the bedrock at a depth of 2.0 to 3.5 m below the current ground level (Fig. 3e,f). A round-shaped basin (Fig. 3e,f) was identified from the ERT, and this fitted well with the less resistive layers (attributed to the granitic chaos resulting from weathering processes) located between ~ 65 m and ~ 105 m in the more extensive but less detailed survey by Best et al. 17 (Fig. 3d).

The constraints related to "Phase 0" are:
• the presence of coarse material, including granitic chaos 2, suitable for both filtering and stabilizing the foundation;
• the gathering and disposal of rainwater, the regularization of the bottom level, and the terrain alignment for future construction.

The advanced Inca hydraulic and geotechnical engineering is clearly evident in the transition from Phase 0 to Phases I and II.
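As a quick sanity check on the reported bedrock depths, GPR depths follow from the two-way travel time and the propagation velocity; the sketch below uses the velocity of 0.07 m/ns adopted later in the Methods, with the travel times back-calculated for illustration only.

# Two-way travel time <-> depth conversion used in GPR interpretation.
# v = 0.07 m/ns is the velocity estimated in the Methods section; the depths
# are the bedrock range reported above (2.0-3.5 m below ground level).
v = 0.07  # m/ns, assumed propagation velocity in the fill

for depth_m in (2.0, 3.5):
    twt_ns = 2 * depth_m / v  # two-way travel time in nanoseconds
    print(f"depth {depth_m} m -> two-way time ≈ {twt_ns:.0f} ns")
# i.e., bedrock reflections arrive at roughly 57-100 ns two-way time.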
Phase I: the quarry
The first transformation of Machu Picchu is characterized by the quarry activity that reshaped the drainage basin. The consensus is that the quarry exploited dispersed rock material resulting from erosion between the peaks that survive to the northeast of the Plaza Principal and east of the Sacred Plaza. Geophysical imaging highlighted that, below the Plaza Principal, the bedrock is characterized by irregular, jagged, and indented shapes (Fig. 4a,b), thus suggesting the presence of loosely attached large blocks (see also Fig. 8d,e) typical of a natural or man-made fractured rock complex (an ancient rip-rap type processing). In the ERT and GPR maps, these areas are identified by interruptions of deep resistive surfaces and reflectors (denoted with a dashed red box in Fig. 4c,d), respectively. The jagged and fractured morphology of the granite rocks is also visible in the georadar profiles (see Fig. 4e), which show the presence of sub-vertical surfaces of the granite rocks, related to quarry extraction.

The geoelectrical depth slices (see Fig. 3f along with Sect. B.1 in SI) provide additional morphological and dimensional details on resistive elements related to the granite blocks. At a depth of 2 m, less resistive areas with an irregular distribution were found. These spaces could be associated with the extraction of granite blocks (probably already fractured) and incoherent rocky material. Accordingly, the GPR radargrams (see Fig. 4c) exhibit local reflectors in these areas.

The geophysical surveys found an irregular and complex topography of the bedrock, which corresponds to a heterogeneous soil filling up to the current level of the square. This spatial variation of fill produces changes in moisture content and vegetation growth (the typical crop marks 18,19; see also Sect. A in SI), clearly visible in the multispectral remote sensing data (Fig. 4f; and Figs. S4-S6 and Sect. A.3 in SI).

Results from satellite-based analyses reveal a large crop mark (approximately 40 × 32 m), caused by the differences in fill depths, particularly evident in the NDVI (Normalized Difference Vegetation Index) map (see Fig. 4f; Table S1 in SI). Higher NDVI values (compared to the neighboring areas) are related to greener or healthier vegetation due to higher moisture and terrain depth. These variations in vegetation growth and soil moisture are also visible in the magnetic susceptibility survey in Fig. 5e 20.

The UAS-based maps show additional smaller crop marks, probably related to the reorganization of the quarry over time, later transformed into the Plaza Principal (Sect. A.3, Figs. S4-S5b in SI).

The reconstruction in Fig. 3g,h recreates the manner in which quarrying modified the catchment basin.

Preparatory phases: the drainage systems
The shape of the catchment basin was modified and filled to create stable foundations for the Plaza Principal. For the Incas, the priority was to design a water drainage system to avoid water infiltration; the Temple of the Sun, for example, was affected by deformation and collapses. The GPR 3D image (see Fig. 5b) shows the concave shape of the bedrock (see also Fig.
8a), which highlights the excellent Inca engineering techniques for water flow management. The Incas were fully aware of the destructive power of uncontrolled water, so its proper management was always one of the first characteristic elements of Inca construction. The control of water resources was essential not only on a practical level, but also symbolically represented a manifestation of political power 21-23. The satellite NDVI map highlights some crop marks (c1, c2, and c3 in Fig. 5a) which help to identify potential drainage collectors (characterized by darker tones, corresponding to lower NDVI values). The GPR sections, topographically corrected using the DTM from UAS photogrammetry, show a gentle slope from NE to SW (as shown in the radar section X-X′ in Fig. 5c). In particular, the slope in the X-X′ section is around 2.6% at the square level (Fig. 5d) and 6% along the radar reflective layer indicated with light blue and red colors in Fig. 4c. This morphological condition represents a good solution for the drainage of surface water. Moreover, the radar section X-X′ also highlights two strong local reflectors (A and B, marked by dashed yellow rectangles in Fig. 5c) that interrupt the shallow reflecting layer (indicated with light blue arrows), referable to the presence of drainage structures common in Machu Picchu.

Below this level, there is another reflective layer at a depth greater than 2 m (see red dashed lines in Fig. 5c), interrupted by local reflections reasonably due to natural fractures and/or quarry cuts, defining a 'two-level' (anthropogenic and natural) drainage system. The latter guided the water into the large central drainage basin in the NE/SW direction to avoid an excessive and dangerous water load near the north and south walls. To facilitate the evacuation of water, the area was reshaped (regularizing the bed of the quarry) and filled using stone/waste material, silty sands with gravel, and sandy silt, as confirmed by two archaeological trenches (see Fig. 2a-c; "Introduction").

The Plaza in the light of archaeological data
After the reshaping of the hydrographic basin (by quarrying) and the construction of the drainage systems, the Incas' efforts were directed to building a space for ceremonial activities. Different phases of soil re-filling and compaction were identified by combining geophysical results with the archaeological data 19, so the main question to answer is: does each filling layer correspond only to a construction phase, or also to a phase of attendance of the Plaza? To answer this question, we combined the archaeological records from units UE13 and UE25 19 (Figs. 1f, 2a-c) with the results from GPR (Fig. 7). UE25 was excavated to define the original position and dimensions of a sacred monolith, known in Quechua as a wanka (previously excavated for restoration and reburied 24), extremely important because, in the Inca worldview, it was provided with ceremonies and offerings. This wanka is in the central part of the Plaza Principal and stands as a testimony to the ceremonial nature of this space.

The excavations revealed two layers at progressive depths of 18 cm and 55 cm (I and II in Figs. 5c, 6b), the expected monolith (lying horizontally), charcoal, and several ceramic sherds from vessels associated with ceremonies, thus confirming that the Plaza was mainly used for ritual activities. From the last excavation level, four small probes of size 1-2 m (named 1, 2, 3, 4; see Figs.
2d-f, 6b) were placed to understand the stratigraphy of the Plaza and identify other cultural phases (for additional details see SI, Sect. D, Figs. S14-S15). Probe 1 revealed four layers (III to VI in Fig. 6d), characterized by silty-sandy soil of different colors and types, whose top surfaces are located at progressive depths of 80 cm, 94 cm, 1.49 m, and 2.40 m. The last layer was composed of lithic fragments, residues of quarrying and stone-cutting activities, placed to fill the fractures and interstices between granite blocks 19. The fragments packed around the foundation stabilized the monolith in an upright position. Three GPR sections acquired across the center of the Plaza Principal (F6, F18, F27; see Fig. 6) exhibit two reflective surfaces, named r1 and r2. The first one, almost horizontal (highlighted with red dashed lines), is 0.80-1.20 m in depth. The second, deeper layer (highlighted with an orange dashed line) is characterized by a curved shape in the middle and two horizontal sections at both ends, following the form of the underlying catchment area. Below it, several local reflectors (marked with red arrows in Fig. 6) are visible. The comparison between the F18 radargram (crossing unit UE25) and the archaeological layers highlights a correspondence between the georadar reflective surface r1 and the interface between archaeological layers III and IV. The top of archaeological layer VI (at a depth of 2.40 m) is close to the georadar reflective surface r2, at a progressive depth of 2.90 m, reasonably caused by the granite bedrock. The difference of half a meter between the top of layer VI and r2 is probably due to a layer of pebbles and lithic fragments. Both the georadar and archaeological layering suggest that at least two human occupation phases characterized the central part of the Plaza. In summary, the only significant cultural layer closely related to the GPR data consisted of lithic elements from monolith processing, but these were probably not closely related to terrace construction. Only the sand fill may have played a stabilizing role, which is clearly illustrated in the probes in the form of trenches (calas) 1, 3, and 4 in Fig. 2d-f. Unit UE13, located on the SE edge of the Plaza, revealed two layers (I and II, at progressive depths of 25 cm and 50 cm, respectively) composed of silt mixed with gravel and cultural material including decorated ceramic fragments, circular pendants, and stone hammers. To establish the stratigraphy, two probes, p1 and p2, were dug. The latter, 1.65 m deep, revealed three layers (III to V in Fig. 6) located at progressive depths of 90 cm, 1.32 m, and 1.65 m, made up of gravel and pebbles for the drainage of rainwater runoff towards the surface of the deeper rocks. This suggested that before building the Plaza, the Incas stabilized the lower platforms with particular attention to water drainage. Such use is clearly evident in the stratigraphic profile of Fig. 2d-f, where the NW probe in layer V is characterized by a fill of fine lithic material. This is a structural element of the terraces and evidently the last layer (after the sand layer) that helped control any hydrological movements. Comparing radargram F18 with the archaeological layers, it is possible to observe a correspondence between the georadar reflective surface r1 and the interface between archaeological layers III and IV, and between the georadar reflective surface r2 and the top of archaeological layer V. Therefore, comparing the archaeological stratigraphy with the georadar reflective layering, it is possible to infer at least three soil-filling phases.
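To illustrate the kind of depth matching performed here between georadar reflectors and excavated layer interfaces, the following sketch pairs each reflector with the nearest stratigraphic marker within a tolerance; the depth values are taken from the text, while the tolerance and the pairing logic are illustrative assumptions.

# Match GPR reflectors to excavated layer interfaces by depth (illustrative).
reflectors = {"r1": 1.0, "r2": 2.9}  # m, representative depths from the text
interfaces = {"III/IV": 0.94, "top of V": 1.49, "top of VI": 2.40}  # m, from the trenches

TOLERANCE = 0.6  # m, assumed allowance for the pebble/lithic packing noted above
for name, rd in reflectors.items():
    # find the closest archaeological interface to this reflector
    closest = min(interfaces, key=lambda k: abs(interfaces[k] - rd))
    gap = abs(interfaces[closest] - rd)
    verdict = "matches" if gap <= TOLERANCE else "has no clear match for"
    print(f"{name} at {rd:.2f} m {verdict} interface {closest} ({gap:.2f} m apart)")
# r2 pairs with the top of layer VI at ~0.5 m separation, consistent with the
# half-meter difference attributed in the text to pebbles and lithic fragments.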
The question is: are these filling phases only designed to set soil platforms ensuring adequate geotechnical and drainage characteristics of the Plaza, or do they correspond (at least 2 out of 3) to diverse phases of attendance of the Plaza? In other words, is there a more ancient plaza under the current one? From the georadar sections F6 and F18, it is possible to observe that the deeper reflective layer r2 has a mixtilinear shape, with a concavity in the center and two horizontal planes. This shape may be a result of the reshaping of the granite bedrock of the water catchment during the quarrying phase. The regular shape of the reflective layer is also due to the presence of crushed stone, lithic fragments, and pebbles placed to fill the fractures and spaces between granite blocks.

The presence of cultural material at various depths, some of it very deep and very close to r2, along with the integration of archaeological and geophysical data, suggests at least two phases of construction of the Plaza Principal.

The first phase could be related to a so-called sunken plaza (plaza hundida in Spanish), lying below 2.5 m from the current surface and smaller than the Plaza Principal. This type of plaza is usually bounded on all four sides and set below the level of the surrounding andenes. Similar spaces are found within the park at Chachabamba, Phuyupatamarca, or Qantupata, where they are interpreted as having a ceremonial function.

Subsequently, the filling process of the Plaza Principal continued, reaching its current size and shape. In this respect, another question arises: did this filling process occur in a single phase or in more than one? The presence of a strongly reflective surface (at a depth of about 1 m) and the presence of cultural material indicate that between the plaza hundida phase and the current Plaza Principal there was an intermediate construction phase. The georadar profiles and the geomagnetic maps (Fig. 7) identified various fill phases in the southeast of the Plaza Principal, in sector D1 (see also Fig. 1e,f). GPR shows the presence of a step of the andenes covered with earth to create a small plaza (Fig. 7c,d). This multi-stage construction process reveals an approach to the creation of the ceremonial space, as will be explained in more detail in the discussion ("Discussion").

Discussion
Two stratigraphic trial trenches, conducted in the Plaza Principal (see "Introduction", "The Plaza in the light of archaeological data", Figs. 1f, 2a,b, Fig. 6; along with Figs. S13-S15 in SI), opened new research questions about the construction phases of Machu Picchu. To help answer these questions, non-invasive EO surveys were conducted over the entire Plaza Principal and its adjacent andenes, thus revealing various preparatory construction phases.

From the drainage basin to the quarry
The surveys highlighted the presence of an impluvium (Fig. 3g), first identified by crop marks from the satellite and UAS imaging, and later confirmed by the geophysical prospections. The integration of results from diverse remote sensing technologies documented the existence of a watershed ("Phase 0: the drainage basin"), oriented in the EW direction (with a maximum depth of around 3-3.5 m). The integration of GPR and ERT imaging allowed the estimation of the granite bedrock depth and the characterization of its shape. The 3D model generated by GPR (Figs. 5b, 8a), along with the georadar and ERT sections (Fig.
4a-d), confirmed the presence of a buried drainage basin along with the jagged surface of the bedrock resulting from the quarrying activities (see "Phase I: the quarry"). The signs of the ancient quarrying are still visible today on numerous large blocks, for example those set along the terrace wall overlooking the Plaza Principal and those close to the Main Temple and the Priest's House in the Sacred Plaza (Fig. 8b,c). Moreover, several large shaped stone blocks emerge, along with rocky bodies (part of the bedrock of the catchment area), fully integrated into the retaining structures of the andenes (Fig. 8d,e).

The Plaza Principal as a work in progress: from natural catchment area to the Plaza in two phases
The EO-based results point out that, despite the well-thought-out architectural plan following a relative constructive coherence, the Plaza Principal of Machu Picchu was subsequently modified (see "Phase II: The Plaza"). The Plaza Principal underwent several changes, likely to accommodate larger public gatherings.

The GPR survey of areas E1, E2, E3, and D1 (see Fig. 7) revealed that the Plaza Principal was developed in two constructive phases, as evident in the southeast area.

The first phase was related to the setting of a plaza hundida, namely a sunken plaza. This architectural feature was a common ritual space in Inca sites, sometimes connected to ritual baths as in the case of Chachabamba 23,25.

The second phase consisted of the extension and elevation of the Plaza Principal, made to expand the area for ritual and social activities along the NW-SE axis, above the original hydrographic basin.

As a whole, around 60% of the construction effort was needed to reshape the water catchment for the drainage system 6. It is widely recognized that the Incas were masters in hydraulic engineering, particularly in water conveyance and management systems 26. The llaqta of Machu Picchu is undoubtedly an outstanding example of the Inca achievements in the design, construction, and management of surface and underground drainage systems. The system remains in use today and is effective in preventing waterlogging, soil erosion, and collapse of walls 6. This stable foundation is the primary reason why the temples, buildings, and agricultural terracing systems remain standing even after centuries of abandonment and heavy rainfall.

The architectural evidence has helped scholars understand the efficiency of the water runoff and drainage systems 6. Less visible and less well understood is the subsurface infrastructure. The integration of the archaeological data with the results of geophysical and multispectral imaging advanced our knowledge and contributed to formulating some hypotheses on the diverse building phases.

Results from satellite, UAS, and geophysical surveys provided evidence of a buried drainage system which exploits the sloping soil layers and bedrock (detected by GPR) to direct the waters towards the southeast side of the Plaza Principal (Figs. 5d, 6c). This hypothesis is confirmed by the excavations, which revealed the overlapping of diverse stratigraphic levels characterized by different granulometry and consistency (sandy silt, silty sands, and silty sands with gravel), devised to increase the permeability (see "Introduction", and Figs. 1f, 2a).
The Incas used to exploit the natural capability of a basin to drain water, maintaining its effectiveness even in the dry season. This has been conceptualized and modeled by Fairley 27,28 based on geologic water storage. The Incas used to manage an aquifer system by building a wall across the former discharge boundary. In this way, the exiting water was forced to be stored close to the wall and then conveyed to a single drain. The water system was adequately controlled and channeled, and the water was leveraged for multipurpose functions 6,11,21,22,29, including ceremonial activities.

The capability to control the water flow was considered evidence of the divine nature of the Inca Emperor. For this reason, numerous Inca hydraulic structures related to water management were conceived and realized as monumental or ceremonial architecture, as, for example, the exceptional water structures of Ollantaytambo 9,30, Tambomachay 31, Pisac 32, and Sacsayhuaman 33,34 in the Cusco and Valle Sagrado area. Moreover, there are numerous well-known examples of the use of hydraulic architecture for ceremonial and, by extension, political purposes in diverse sites of Tawantinsuyu, such as Inca Caranqui 35 and Ingapirca 36 in Ecuador, Namachuco devoted to Apu Catequil 37, or the most emblematic example in Saywite 38.

In Machu Picchu, the drainage system of the Plaza Principal was likely made to drain the andenes system up to the north (Fig. 9a, F) and south (Fig. 9a, D1 and D2). This hypothesis was corroborated by archaeological evidence from the east side of Plaza D2, where a tunnel was built to transport the water to the Condor Temple (Fig. 9b-e), set over the contours of the rocks and characterized by a large stone seen as the representation of a condor. The Temple of the Condor is a complex of buildings which includes caves used for ceremonial activities. South of the Condor complex there are several buildings with privileged access to the water from a sacred bath system, as is common in Inca sacred areas. One of these unique baths is located right next to the Temple of the Condor (Fig. 9f) and was likely connected in the past to the water drainage system. The water management system was developed in two constructive phases (see Fig. 10), like the Plaza Principal. It is worth mentioning an important finding related to the construction of water systems in the urban sector. The Incas planned to replace the segment of the water supply channel located in the Urban Sector, originally built with irregular stones joined with clay. The aim was to replace the old structure with around fifty new lithic elements, which were found scattered on the 7th platform of the Agricultural and Urban Sectors. This modification would have prevented water infiltration that would affect the structures of the Temple of the Sun complex 27. This clearly shows that the Incas were aware of the problems of water infiltration, and able to change designed plans to address unexpected issues.

Like many Andean cultures, the Incas understood and sought to control natural phenomena, such as water, with innovative hydraulic and environmental engineering techniques. In addition to these practical considerations, the control of water was presented as political and sacred power 29.

Conclusions
Machu Picchu, with its associated sites, has long puzzled scientists for many reasons, such as its location, the highly sophisticated Inca capability for adaptation, and the hypothesis that it was never finished.
The non-invasive investigations devised herein enabled the reconstruction of the first building phases, including the initial site preparation. Multiscale and multisensor EO techniques (including geophysics) documented the anthropogenic layering of the subsoil, thus allowing the recreation of the initial pre-construction setting and unveiling the Machu Picchu environment before the construction that we know nowadays. This area first served as a quarry, was subsequently reshaped, and was then secured through adequate drainage systems (see Fig. 10).

As a whole, the devised non-invasive analyses:
(i) enabled the identification and characterization of the diverse phases of the construction site;
(ii) revealed that the Plaza Principal was developed in two constructive phases, the first related to the setting of a plaza hundida, namely a sunken plaza, and the second related to both the extension and elevation of the Plaza Principal to expand the area for ritual and social activities;
(iii) unveiled the buried drainage systems adopted for the andenes as for the whole site, to drain water and to prevent structural collapses. The drainage systems are still effective today, as evident from the fact that the abandoned site remained stable for centuries without maintenance;
(iv) improved our understanding of the Incas' capability to confront geomorphological and hydrogeological hazards with highly sophisticated and effective environmental engineering interventions fully integrated with nature and the sacred landscape, the result of a local evolution of more ancient construction cultures, including the Tiwanaku one 9,10,41.

Some examples of the Incas' achievements are evident in the drainage systems, still effective today, and in the terraces (andenes), made as wide steps to stabilize the site, whose slopes exhibit debris accumulation as a result of past and present landslide activity 42, efficiently designed by reshaping the gradient of the slopes for several functions: (a) for risk mitigation, protecting from uncontrolled runoff and hillside erosion; (b) for agricultural purposes, to gain land for food production; and (c) as a complementary part of the most important ceremonial constructions. The Incas were certainly the first experimenters with and users of nature-based solutions for risk mitigation purposes.

Methods
This section explains the methodological approaches used for our investigations; additional details are given in the Supplementary Information (SI).

Results from non-invasive multisource prospections were coupled with archaeological records in order to identify and characterize the diverse phases of the site transformation and arrange a relatively complete picture of the construction process. Findings from the archaeological excavations facilitated the interpretation of the results from the EO surveys, which provided broad subsurface imaging of the Plaza Principal (the original core of the whole archaeological area).
Five complementary survey methods were used to investigate the subsoil at different depths (see SI): multispectral imaging from (i) satellite and (ii) UAS platforms, to identify and map the presence of buried structures, pits, and ditches through archaeological proxy indicators visible at the surface; (iii) electrical resistivity tomography (ERT), to characterize the electrical behavior of the subsoil down to 10 m; (iv) Ground Penetrating Radar (GPR), to detect and image objects, bodies, and anthropogenic layers reflecting electromagnetic waves, down to an expected depth of 2 m; and (v) magnetic surveys in gradiometric mode, to detect and map variations of the Earth's magnetic field referable to anthropization processes.

Several advantages are expected from using different survey methods: (i) overcoming the intrinsic limitations of a single method, including effectiveness, time, and cost of acquisition; (ii) performing investigations at diverse spatial scales; and (iii) sensing the subsoil at different depths, thus facilitating the archaeological interpretation.

Satellite and UAS data set
The Very High Resolution (VHR) satellite data set used for our analyses was made up of multitemporal, multi-sensor, multispectral images. The UAS survey was carried out employing a DJI Phantom 3 Professional, equipped with its built-in RGB camera and with a Parrot Sequoia multispectral camera. The acquired images were radiometrically corrected using images of a Parrot Sequoia reflectance panel captured before and after each flight. Finally, in order to work on a GIS basis with all data from remote sensing (drone and satellite) and geophysics, several ground control points (GCPs) and ground validation points (GVPs) were surveyed with a high-precision GNSS. These points were then used (i) for the correction of the photogrammetric processes and (ii) for the correct georeferencing of the datasets (process described in SI and Fig. S3).

Satellite and UAS data processing
The data set, acquired from both the satellite and the UAS survey, was processed following the flowchart in Fig. S1 in SI, devised to extract information and make the results comparable across the different spatial scales (0.3-0.5 m for satellite, and 0.04 m for UAS). For each year, the multi-band images were processed to compute spectral indices (formulas are listed in Table S1 in SI) to enhance archaeological features (see also 43-51).

Electrical resistivity tomography
Electrical resistivity tomography (ERT) is a geophysical method based on imaging the electrical resistivity distribution within the subsoil by injecting a current into the ground and measuring the related potential drops 52.

The ERT surveys were carried out using Dipole-Dipole (DD) and Wenner-Schlumberger (WS) acquisition schemes; the former for its ability to detect lateral resistivity variations and the latter for its higher signal-to-noise ratio and its sensitivity to vertical discontinuities 53. DD and WS data were collected in both direct and reverse mode. The latter mode is based on the "reciprocity principle" 54 and consists of swapping the positions of the current and potential electrodes.

The geoelectrical data were inverted using the commercial software RES2DInv. A synoptic view of the ERT results is in Fig. S7 (for additional details see the Supplementary Information and 55,56).
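A common way to exploit the direct/reverse acquisitions mentioned above is a reciprocal-error check that discards unreliable quadripoles before inversion; the sketch below is a generic illustration of that quality-control step, and the 5% threshold and the resistivity values are assumptions, not figures reported by the authors.

import numpy as np

# Quality control of ERT data via the reciprocity principle (illustrative).
# rho_direct / rho_reverse: apparent resistivities measured with normal and
# swapped current/potential electrode pairs, one value per quadripole.
rho_direct = np.array([120.0, 95.5, 310.2, 88.7])   # ohm*m, made-up values
rho_reverse = np.array([118.3, 97.0, 355.9, 89.1])  # ohm*m, made-up values

# Reciprocal error: relative difference between the two measurements.
recip_err = np.abs(rho_direct - rho_reverse) / ((rho_direct + rho_reverse) / 2)

THRESHOLD = 0.05  # assumed 5% acceptance level, a typical (not reported) choice
keep = recip_err <= THRESHOLD
print(f"kept {keep.sum()} of {keep.size} quadripoles for inversion")
# Quadripoles failing the check (here the third one, ~13.7% error) are rejected.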
GPR investigations
GPR exploits radar pulses to image the subsurface; by using antennas with different operating frequencies, the method provides adequate resolution and depth of investigation for the most common archaeological applications 57-59.

The GPR data were acquired using a TH Dual-F Hi-Mod system (IDS), equipped with a multi-frequency (200 and 600 MHz) antenna. The presence of obstacles, such as megaliths, stones, and irrigation pipes, was considered when interpreting the data. Raw data were processed using the following processing chain (shown in Fig. S8 in SI):

a. Time gating, to remove the reflections due to the air layer between the antenna and the subsoil surface; in this way, direct-wave effects were removed.
b. Background removal, to remove the background noise. For this purpose, an average trace is calculated for the entire radargram and then subtracted from every single GPR trace, sample by sample.
c. Signal gaining with an AGC filter, to provide a time-varying enhancement of signal amplitudes. The filter performs a subtraction between the average amplitude of the signal in a given time window and the maximum amplitude of the overall trace. The time windows chosen were equal to 70 ns and 30 ns for the 200 MHz and 600 MHz data, respectively.
d. Band-pass filtering, to remove non-coherent noise that limits the signal-to-noise ratio. The filter works in the frequency domain and acts on each trace independently. For the data acquired at the nominal frequency of 200 MHz, only the signal between 75 and 350 MHz was retained.
e. Kirchhoff migration, for the time-depth conversion, performed after evaluating the characteristics of the subsoil. For this purpose, the estimated velocity was 0.07 m ns⁻¹.
f. Normalization of the amplitude (performed on the mean amplitude value of the complete profile) to de-clip saturated traces using polynomial interpolation (for additional details see SI and 60-64).

Geomagnetic prospections
The geomagnetic method (MAG) is based on mapping local variations of the Earth's magnetic field resulting from changes in the magnetic properties of the underlying rocks or from the presence of buried artifacts within the subsoil 65,66.

The MAG data were acquired in gridded areas of various sizes (ranging from 20 × 20 m to 40 × 40 m) using survey procedures that are standard in archaeological prospection. Calibration was performed on-site prior to the acquisition through an automated procedure which corrects possible misalignments in the sensor measurements.

Standard processing procedures, using signal and image processing techniques, were applied, and the magnetic data were rendered as an image. Vertical gradient maps were produced applying a minimum-curvature ("spline") interpolation to smooth the data.

Archaeological records
In 2016 and 2017, two excavation units were dug 14,19,24 in the areas shown in Fig. 1f, labeled as UE25 (11.70 × 5 m) and UE13 (6 × 6 m) (see also Fig. 2a,b). These excavations revealed layers of different soil types and colors representing two distinct construction phases: 1. the stabilization of the lower platform by filling and setting water drainage systems, and 2. the sculpting of the present stepped configuration (see the graphical summary in Figs. S13-S15 in SI). The two excavation units yielded limited but significant findings related to the construction phases. To extrapolate these results across the entire Plaza, non-invasive Earth Observation (EO) surveys were conducted (for additional details on EO data integration see 67-73).
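As an illustration of step b of the GPR processing chain above (background removal by average-trace subtraction), the following sketch shows the operation on a synthetic radargram; the array sizes and the synthetic banding are arbitrary, and this is not the vendor's implementation.

import numpy as np

# Background removal on a radargram stored as (n_samples, n_traces):
# subtract the average trace from every trace, sample by sample, to
# suppress horizontal banding (antenna ringing, direct waves).
rng = np.random.default_rng(0)
radargram = rng.normal(size=(512, 300))           # synthetic data, arbitrary size
radargram += np.linspace(1.0, 0.0, 512)[:, None]  # fake horizontal banding

mean_trace = radargram.mean(axis=1, keepdims=True)  # average over all traces
cleaned = radargram - mean_trace                    # step b of the chain

print(cleaned.mean(axis=1)[:3])  # per-sample means are now ~0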
Figure 1. (a-c) Southern America and Peru: geographic and geological location of Cusco and the llaqta of Machu Picchu, settled on the Eastern Cordillera of the Andes and surrounded by the Western Cordillera, the Plateau (where Cusco is located), the Amazon plain, and the Sub-Andean area. (d) GeoEye satellite-based map of the llaqta of Machu Picchu showing the main sectors of the site: the agricultural sector in the south and the urban sector in the north, divided into two subsectors, the Hanan (to the west) and Hurin (to the east), separated by the Plaza Principal. (e) Zoom of image 1d focused on the Plaza Principal; (f) 3D model obtained from the UAS-based aerial photogrammetry. In (e,f), the letters indicate the Intihuatana (B1), the Sacred square, including the Main Temple and the Three Window Temple (B2), the building complex known as the 'Three Portals' (C1), the residential area (C2), the Condor Temple (C3), the Plaza Principal (E1, E2, E3), some terraces known as andenes (D1), the second Plaza (D2), and the northwest andenes (F). UE25 refers to the two excavation units in the Plaza Principal.

Figure 2. (a) Detail of the Plaza Principal with the location of UE25; (b) detail of UE13; (c) detail of UE25; (d) stratigraphic profile of UE13 with all trenches marked and (e) details of probe 02 (cala in Spanish) of UE13; (f) stratigraphic profile of trench 01 of UE25; (g) longitudinal cut of trenches 03 and 04 of UE25. (d-f) (credits: PIAISHM archives.) The name UE comes from the Spanish Unidad de Excavación (excavation unit), meaning a single excavation unit, or a single area subjected to excavation. As we did not want to interfere with internal terminology at any stage, we decided to use the same nomenclature to avoid problems around naming the same phenomena.

Figure 3. (a,b) Evolution of granite chaos in two phases: in the first (a), rainwater penetrated through fractures and faults, and in the second (b), rainwater and gravity separated the granite blocks, thus forming the granite chaos 5; (c) outcrops of granite chaos on the southwest side, close to the Hanan sector; (d) ERT profile by Best et al. 17 crossing the hilly reliefs of Hanan, Hurin, and the Plaza Principal; (e) bedrock surface reconstruction as derived from the GPR survey; (f) geoelectrical depth slices at z = −0.2 m, z = −1.0 m, and z = −1.9 m; (g) virtual reconstruction of the drainage basin.

Figure 4. (a) GPR section p1; (b) ERT section p1; (c) GPR section p2; (d) ERT section p2; (e) location of sections p1 and p2; (f) satellite NDVI map with crop marks; (g,h) virtual reconstruction from Phase 0 (the drainage basin) to Phase I (the quarry). The yellow arrows in (a) indicate the radar reflective surface. The same arrows have been superimposed on the ERT profile (b), highlighting that the reflective surface roughly matches the top edge of a resistive body related to the granite bedrock.

Figure 5.
(a) Satellite-based NDVI map highlighting three crop marks, c1, c2, and c3, related to spatial variations of soil filling; (b) GPR imaging overlaid on the 3D model obtained by UAS-based photogrammetry; (c) GPR section X-X′ crossing the Plaza Principal along the NE-SW direction; (d) topographic section X-X′; (e) magnetic map.

Figure 6. (a) Plaza Principal with the location of three GPR sections (F6, F18, F27) and the excavation units UE25 and UE13. (b) Excavation units UE25 and UE13: maps with the location of the probes, and a detail of layer 5 of probe 2 of UE13. (c) Radargrams F6, F18, and F27. Red and orange dashed lines denote two reflective surfaces, r1 and r2, respectively. Red arrows indicate some local reflectors below r2, referable to granite rock bodies. (d,e) Stratigraphy of probe 1 of UE25 and probe 2 of UE13, respectively. Orange arrows indicate the presence of cultural material found by the archaeologists. (f) Zoom of radargram F18 aimed at comparing the archaeological layers with the GPR reflective layers.

Figure 7. Plaza D1. Geophysical results revealing a two-step construction phase. (a) Magnetic map; (b) GPR depth-slice map at 1.60 m depth; (c) location of the two radargrams F01 (d) and F02 (e) that reveal the two distinct, overlapping construction phases.

Figure 8. (a) Reconstruction of the jagged concave bedrock characterizing the Plaza Principal, obtained by picking the GPR reflections imputable to the batholiths. (b,c) Signs of quarrying activity visible on granite blocks of the Main Temple (b) and the Priest's House (c) (photos by N. Abate). (d) Plaza Principal: the red box indicates a rocky body of the bedrock emerging above the ground level of the Plaza; (e) zoomed detail of the rocky body seen in (d) (photo by N. Masini).

Figure 9. (a) Hypothesis on the underground water drainage system. Light blue arrows indicate the water flow direction; the red circle denotes the architectural complex of the Temple of the Condor, towards which part of the water drained south of the Plaza Principal is conveyed. (b-f) Details of the Temple of the Condor. (b) and (d) show a tunnel which in the past conveyed the water towards the Temple of the Condor. (e) Detail of a canal and a rock of the ritual space of the temple. (f) Bath next to the Temple of the Condor.

Figure 10. (a-c) Reconstructive hypothesis of the llaqta of Machu Picchu during the preparation phases of the site: from the water catchment (a) to the quarry (b), up to the Plaza Principal, in turn built in two phases, the first relating to the plaza hundida (c). Finally, image (d) depicts the last configuration of the Plaza and, all around, the andenes, the buildings, and the temples.
Pseudomyogenic Hemangioendothelioma Involving the Esophagus: A Case Report

Herein, we describe the case of a 20-year-old woman who presented with dysphagia of 2 months' duration associated with vomiting, moderate abdominal pain, decreased oral intake, and significant weight loss. During the past 3 years, the patient had experienced intermittent mild abdominal pain with infrequent vomiting. Endoscopy at Jordan University Hospital showed a mass in the esophagus, and endoscopic biopsies were performed. The preliminary histopathological report excluded malignancy. Two days after endoscopy, the patient presented to the emergency department complaining of severely worsening pain and total dysphagia. The pain persisted despite intravenous paracetamol administration, which was concerning for esophageal perforation; therefore, an urgent surgical intervention was performed. The mass was removed surgically, along with a para-esophageal lymph node. The final histopathological results of the endoscopic and resected specimens supported the diagnosis of pseudomyogenic hemangioendothelioma (PMHE). This is the first case reporting esophageal involvement of PMHE.

Case report
A 20-year-old female patient presented to the gastroenterology clinic at Jordan University Hospital, complaining of progressive dysphagia of 2 months' duration, associated with nausea, vomiting, moderate epigastric pain, decreased oral intake, and significant recent unintentional weight loss of 10 kg. The patient's appetite was normal. The physical examination was remarkable for epigastric tenderness and cachexia (weight, 40 kg). During the past 3 years, the patient had experienced intermittent mild abdominal pain with infrequent vomiting. She was followed at a private clinic (outside our university hospital), and 2 previous upper endoscopies had been done, without proper documentation of the endoscopic or pathologic findings. On both occasions, she was diagnosed with benign distal esophageal ulceration due to gastroesophageal reflux disease and was treated with a proton pump inhibitor, with partial improvement. A few days after her visit, endoscopy at Jordan University Hospital showed an obstructive mass extending from 28 to 33 cm from the incisors. The mass was polypoid, fungating, friable, and ulcerated (Fig. 1A). Multiple endoscopic biopsies were performed. The clinical impression was in favor of a benign mass, given the long duration of symptoms (3 years). A Gastrografin swallow study showed esophageal dilatation, with a large filling defect involving the lower esophagus (Fig. 1B). Abdominal and chest computed tomography showed esophageal luminal obliteration by the mass, diffuse esophageal wall thickening, a hiatal hernia with adjacent necrotic para-esophageal lymph nodes (the largest of which measured 8 mm along its short axis), and pathologic para-aortic lymph nodes (Fig. 1C). The bone scan showed no foci of abnormal uptake. The preliminary histopathological report excluded malignancy but was not conclusive. Two days later, the patient presented to the emergency department complaining of severely worsening abdominal pain and total dysphagia. The pain persisted despite intravenous paracetamol administration, which was concerning for esophageal perforation. Therefore, urgent surgical resection of the esophageal mass was performed, even without a final histopathological diagnosis. Since the preliminary histopathological report excluded malignancy, we decided to manage the patient with distal esophagectomy without para-aortic lymph node dissection.
Distal esophagectomy with gastric pull-through was performed through a left thoracoabdominal incision. The esophageal mass was located 3 cm from the proximal resection margin and 2.2 cm from the distal resection margin. A para-esophageal lymph node around the distal third of the esophagus was also resected. A few days later, the final histopathological results of the endoscopic and resected specimens supported the diagnosis of pseudomyogenic hemangioendothelioma (PMHE). The histopathological examination showed an epithelioid neoplasm with multiple foci of necrosis. The tumor cells were medium-to-large in size, with abundant eosinophilic cytoplasm and round vesicular nuclei, without significant nuclear atypia. Few mitotic figures were noted. The tumor cells were infiltrated and surrounded by a polymorphous inflammatory cell infiltrate composed of neutrophils, lymphocytes, and plasma cells (Fig. 2A). Lymph node metastasis of the tumor was also evident (Fig. 2B). Multiple, well-controlled immunohistochemical stains revealed membranous immunoreactivity of the tumor cells for CD31 (Fig. 2C), nuclear positivity of the tumor cells for the vascular marker ERG (Fig. 2D), and focal immunoreactivity of the tumor cells for pan-cytokeratin (Fig. 2E). The postoperative course was smooth, and the patient was discharged on the ninth day after the operation. Six weeks after the operation, the patient had gained 8 kg and did not report any recurring symptoms. The patient provided written informed consent for the publication of her clinical details and images.

Discussion
PMHE is classified by the World Health Organization as an intermediate-grade vascular tumor that rarely metastasizes. The largest series showed that the tumor typically involves the dermis and subcutaneous tissue, with a smaller number of cases involving skeletal muscle and bone [1]. PMHE has a striking male predominance (82%). The mean age at presentation is 31 years, and 94% of patients present during their second to fifth decades of life [1]. This tumor is extremely rare; to the best of our knowledge, and after a careful review of the English-language literature, this is the first case of well-documented PMHE of the esophagus. Furthermore, it is the third reported case describing lymph node involvement by this neoplasm [1,2]. Histologically, the tumor has a nodular architecture, with infiltration toward surrounding adipose or skeletal muscle tissue and an occasional desmoplastic reaction. The tumor cells are arranged in sheets or short fascicles within a background of variably prominent inflammatory infiltrates, commonly composed of neutrophils or, less commonly, lymphocytes, plasma cells, or eosinophils. Some cases demonstrate a myxoid background [3]. PMHE seems to have a variable clinical course, with frequent local recurrence but a small risk of distant metastasis [1]. These tumors are usually treated by surgical excision. Since metastasis is rare, local surgical control is the mainstay of recommended management. All the symptoms related to PMHE are due to local mass effects, as in our case. Since this neoplasm is slow-growing and considered to be of intermediate grade [3], experts believe that conservative, symptomatic management may be warranted. The few studies investigating this issue demonstrated that some PMHE patients survived for many years with non-surgical, conservative, and symptomatic treatment [1,4-7].
However, PMHE is known to exhibit local recurrence with the possible development of new lesions; 58% of cases demonstrate local recurrence or the development of additional nodules in the same region, 94.5% of which occur within 1 year of the first presentation [1]. In the event of local recurrence or the development of new lesions, surgical resection (re-excision) is recommended. In view of the high recurrence rate, regular follow-up is advisable, especially in the first year, in which most recurrences take place (94.5%). In addition, long-term follow-up is advisable due to the real, albeit very small, risk of distant metastasis occurring long after the initial diagnosis [1]. Metastatic lesions occur in rare cases [8]. Systemic chemotherapy has been suggested for this group of patients in an attempt to induce tumor shrinkage and consequently alleviate the tumor mass effects. Unfortunately, substantial clinical trials have not been conducted due to the rarity of PMHE, and different types of systemic chemotherapeutic regimens have been described [8]. Monotherapy regimens have included everolimus, sirolimus, telatinib, and gemcitabine. Combined regimens have included sirolimus and zoledronic acid, gemcitabine and docetaxel, cisplatin and doxorubicin, cyclophosphamide and prednisolone, and ifosfamide and doxorubicin. Multifocality, age at presentation, sex, and tumor size may be prognostic factors, but more studies are needed [3]. Generally speaking, long-term survival in affected patients is excellent [9]. Both cases of lymph node involvement previously reported in the literature showed inguinal lymph node and lower limb involvement. The first case involved a 49-year-old woman with a long disease course and 2 local recurrences. The second recurrence included inguinal lymph node involvement 10 years after excision of the primary subcutaneous tumor. During the course of the disease, the patient underwent 3 surgical procedures (excision of the primary tumor, followed by 2 re-excision procedures for the 2 local recurrences). The second case was an 18-year-old male patient presenting with PMHE involvement of the thigh, scrotum, penis, and inguinal lymph nodes, all of which were excised surgically. The patient remained disease-free.
2021-03-27T06:16:35.539Z
2021-03-24T00:00:00.000
{ "year": 2021, "sha1": "3cfb7462c33c875ee8ec9b198a382d038d298706", "oa_license": "CCBYNC", "oa_url": "https://www.jchestsurg.org/journal/download_pdf.php?doi=10.5090/jcs.20.151", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "77a2c9caed4463c3f90ba7a2aa1ac0f42fdc8597", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
139511776
pes2o/s2orc
v3-fos-license
Investigation of charge dissipation in jet fuel in a dielectric fuel tank
The electrostatic charge dissipation process in jet fuel in a polypropylene tank was investigated experimentally. Groundable metallic terminals were installed in the tank walls to accelerate the dissipation process. Several sensors and an electrometer with a current measuring range from 10⁻¹¹ to 10⁻³ A were specifically designed to study the dissipation rates. It was demonstrated that, thanks to the sensors and the electrometer, one can obtain reliable measurements of the dissipation rate and examine how it is influenced by the number and locations of the terminals. The conductivity of the jet fuel and the effective conductivity of the tank walls were investigated in addition. The experimental data agree well with the numerical simulation results obtained using the COMSOL software package.

Introduction
The electrostatic fuel charging effect is extensively described in both theoretical [1] and practical [2,3,4,5] terms. The relevance of dissipation studies is underpinned by the broad use of polymers and composites in fuel tanks. This work focuses on experimental investigations of electrostatic charge dissipation in jet fuel in a tank made of a material which has a much lower conductivity than the fuel. In a real environment, dissipation occurs as the charge escapes through the tank walls and through possible grounded points, such as the tank's external metallic elements exposed to the environment. An experimental set-up was specifically designed to simulate the dissipation process in TS-1 jet fuel in a polypropylene tank by modeling the effects of possible grounded contacts on the dissipation rate. The principal dissipation rate evaluation method consisted of measuring the discharge current using probes immersed in the fuel. For the method to be effectively used, we had to design and manufacture an electrometer capable of measuring currents as low as 10⁻¹¹ A. An extensive series of experiments was conducted, which provided a good insight into the impact of the grounded terminals on the dissipation rate and into the effects of the resistance of the fuel layers bordering on the probes on the dissipation rate measurement accuracy. The charge sign was shown to have virtually no effect on the dissipation rate. The dissipation process was simulated using the COMSOL software package. Specific experiments were performed to measure the exact values of the fuel (TS-1) and tank wall conductivities required for the mathematical modeling. The datasheet conductivity of TS-1 with no electrostatic agent added is 4·10⁻¹² S/m, which was confirmed by the measurement taken with the specifically designed instrument. The datasheet polypropylene tank conductivity value is from 10⁻¹⁵ to 10⁻¹⁴ S/m, whereas the effective conductivity measured in the experiment has a much lower value: 2.6·10⁻¹⁶ S/m. These data were used in the dissipation process simulation with the aid of the COMSOL software package. The simulated dissipation rates correlate well with the experimental values.

Experimental set-up
The experimental set-up diagram is given in figure 1. Fuel was either pumped from Tank 3 to Tank 1 or flowed from Tank 2 to Tank 1 due to the difference in levels. The key element of the experimental set-up is a polypropylene tank with a volume V = 0.48 m³ (1200×800×500 mm) and a wall thickness of 10 mm (Tank 1). The tank has a polypropylene cover, and the entire system is shielded by a grounded 3 mm thick steel sheet.
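For orientation, the time scale of charge relaxation implied by these conductivities can be estimated from τ = εε₀/σ (the same exponential dependence used for equation (1) below). A minimal sketch, assuming relative permittivities of roughly 2 for both the fuel and the polypropylene (typical values, not quoted in the paper):

```python
EPS0 = 8.8542e-12  # F/m, vacuum permittivity

def relaxation_time(eps_r, sigma):
    """Exponential charge relaxation time tau = eps_r * eps_0 / sigma, in seconds."""
    return eps_r * EPS0 / sigma

# TS-1 jet fuel: datasheet conductivity 4e-12 S/m (from the text); eps_r ~ 2.1 assumed.
tau_fuel = relaxation_time(2.1, 4e-12)     # ~4.6 s
# Polypropylene wall: measured effective conductivity 2.6e-16 S/m; eps_r ~ 2.2 assumed.
tau_wall = relaxation_time(2.2, 2.6e-16)   # ~7.5e4 s, i.e. about 21 hours

print(f"fuel: tau ~ {tau_fuel:.1f} s, wall: tau ~ {tau_wall:.2e} s")
```

The dissipation times observed below (on the order of 10⁴ s without grounded terminals) fall between these two bounds, consistent with the charge escaping mainly through the weakly conducting walls.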
The tank is fitted with eight groundable metallic contacts located on the walls and bottom, and eight discharge current sensors. The cylinder-shaped current sensors are made of brass and have a diameter of 9 mm and a length of 10 mm. The sensors are connected to the discharge current meter (electrometer) by bronze conductors insulated from the fuel, and can be moved heightwise, allowing the discharge current measurements to be taken at different distances from the tank bottom. The fuel column height was equal to 420 mm, and the ullage was blown with nitrogen. The fuel pipeline downstream of valve 3 is made of polypropylene, except for a copper section where high voltage is supplied to the fuel from an external voltage source. Groundable terminals (K1, K2, ..., K8) are installed in the tank walls to control the dissipation rate. Charged fuel is supplied through valve 9 into the tank containing the discharge current sensors and charge-sensitive pre-amplifiers (CSP). The CSP output signal is transmitted to the analog-digital converter and saved by the PC. The discharge current measurement system is described in more detail below. Once the experiment was over and the discharge current had dropped by more than a factor of 10, the fuel was discharged into Tank 3. The duration of the experiments varied from 1.5 to 25 hours, depending on the number of grounded contacts. Besides, measurements were taken to register the fuel temperature, the nitrogen pressure in the ullage, the fuel flow rate during tank filling, the sensor depth, and the overall fuel level in the tank. A total of 11 series comprising 74 experiments were performed.

Discharge current measurement procedure
The charge-sensitive pre-amplifier converts the charge picked up by the probe into a potential signal registered by the set-up's measurement system. The current flowing from the probe through the pre-amplifier should be much lower than the typical currents in the system under study. In the pre-amplifier designed for the experiments and illustrated in figure 2, the measurement current was determined by the input bypass RC (1 GOhm) and the leakage current of the instrumentation amplifier INA116 (3·10⁻¹⁵ A). The amplifier's input circuits are provided with filters (R_F C_F) and gas dischargers (R_A) to ensure protection against high voltage. Designed as a 9-channel instrument, the amplifier has sensitive input circuits attached to polycarbonate dielectric stands by point-to-point wiring. Each channel has a gain selector (1, 10, 100 and 1000). Each of the pre-amplifier's channels was calibrated using a specifically designed testbed. The calibration was performed in three input current ranges: 10⁻¹¹ A, 10⁻¹⁰ A, and 10⁻⁹ A. The reciprocal influence of the different pre-amplifier channels was investigated in the course of calibration. The calibration curves demonstrated good linearity (0.1%) in all the input current and gain ranges.

Charge dissipation rate: experimental investigation results and their processing
The measurements yielded the dependence of the charge picked up by the sensor on the time elapsed after the tank had been filled with fuel charged by a high-voltage source. The experimental data were processed by dedicated software using the ROOT software package [6]. A typical view of this dependence for one of the experiments is shown in figure 3.
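The processing step just described, extracting a dissipation time from the measured charge-versus-time curve (done with the ROOT package [6] in the paper), can be sketched with an exponential fit, anticipating the dependence (1) given below. A minimal SciPy version with synthetic sample data rather than the experimental values:

```python
import numpy as np
from scipy.optimize import curve_fit

def decay(t, q0, tau):
    """Exponential relaxation model Q(t) = Q0 * exp(-t / tau)."""
    return q0 * np.exp(-t / tau)

# Hypothetical measurements: time in seconds, charge in arbitrary units.
t = np.linspace(0, 60000, 50)
rng = np.random.default_rng(1)
q = decay(t, 1.0, 17000.0) * (1 + 0.02 * rng.standard_normal(t.size))

popt, pcov = curve_fit(decay, t, q, p0=(1.0, 10000.0))
q0_fit, tau_fit = popt
tau_err = np.sqrt(pcov[1, 1])   # statistical uncertainty on the fitted tau
print(f"tau = {tau_fit:.0f} +/- {tau_err:.0f} s")
```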
In an unbounded uniform medium, the charge density decays exponentially:

ρ(r, t) = ρ(r, 0) exp(−t/τ), τ = εε₀/σ. (1)

Here ρ(r, t) is the charge density, ρ(r, 0) is the charge density at the initial time point, ε is the relative dielectric permittivity of the material, ε₀ = 8.8542·10⁻¹² C²/(N·m²) is the absolute dielectric permittivity of vacuum, σ is the specific electric conductivity of the medium, t is time, and τ is the charge dissipation time. In the experiment, the charge dissipates in a big, although not infinite, volume limited by the tank walls with electric conductivity σ_wall from 10⁻¹⁵ to 10⁻¹⁴ S/m. Therefore, the dissipation process was described using the same type of dependency (1), whereas the dissipation time was found by approximating the experimental data. Figure 3 depicts the results of the approximation, which yielded a dissipation time of 17175.9 ± 0.9 s in the absence of grounded terminals and with only one measurement probe used. The probe has little effect on the dissipation rate for walls with conductivity from 10⁻¹⁵ to 10⁻¹⁴ S/m, due to the high resistance of the fuel layer bordering on the small-sized probe. In the case of a ∅9 mm probe, the fuel resistance grows fast, reaching 3.4 TOhm, which is comparable to the resistance of the walls with a conductivity of 10⁻¹⁵ S/m. The experiments with one, two and eight probes enabled evaluating the effective electric conductivity of the tank's polypropylene walls: σ_eff = 2.637·10⁻¹⁶ S/m. At such a low wall conductivity, the probe has a noticeable effect on the dissipation rate. However, even such a strong effect can be neutralized by selecting a smaller probe. The fuel layer around a probe of a length of 5 mm and a diameter of 2.5 mm has a resistance of 6.4 TOhm. Besides, if grounded terminals are connected, the dissipation rate is governed by their low resistance. Thus, the small probes are quite suitable for measuring dissipation rates in the experiment described here. As suggested by figure 4a, increasing the number of terminals can significantly reduce the dissipation time. Moreover, measurements taken at different levels do not differ by more than 12%. Obviously, the charge is distributed over the fuel volume in a fairly uniform manner, provided that σ_fuel >> σ_wall. In the absence of grounded terminals, the dissipation rates obtained from measurements with different probes differ significantly (figure 4b), because the charge dissipates both through the tank walls and through the probes. With three or more grounded terminals, measurements taken with probes of different sizes yield virtually the same results.

Numerical simulation of the charge dissipation process in TS-1 in a polypropylene tank
The electrostatic charge dissipation process in TS-1 was simulated using the COMSOL software package. The simulation model was designed so as to describe the experimental conditions as precisely as possible. The simulation model and the experiments alike used 8 cylindrical terminals placed on the tank walls and connectable to zero potential. Eight current probes were immersed in the fuel. They could be connected to zero potential through the designated resistance. The tank's outer surface was grounded. The simulation model contained 48 charge volume density and potential measurement points, corresponding to the eight measurement probes placed at six different heights in the experiment. The problem was solved in two steps. A simulation began with an electrostatic case with a uniform charge volume density in the fuel. The initial charge volume density was calculated based on the measurement of the current I taken during fuel charging, the tank filling time τ, and the fuel volume V: ρ₀ = Iτ/V = 4.8·10⁻⁶ C/m³.
The resulting potential distribution was used as the initial condition for solving the non-stationary dissipation case. The boundary conditions were the same as for the stationary case addressed at the first step: the tank's outer surfaces were grounded and the grounded terminals had the required configuration. Besides, the current probes' resistance was specified for the non-stationary case. Similarly to the experimental data, the numerical simulation results were saved to a text file and processed by dedicated software using the ROOT package [6]. Figure 5 compares the simulated dissipation rate to that obtained in the experimental series 2-5, with varied probe depths and external high-voltage sign, and with an increasing number of grounded terminals. The experimental points obtained in different experimental series are slightly shifted along the abscissa axis so that the different series do not overlap. As indicated by figure 5, the differences in dissipation rates for all the series concerned lie within the same range (from 5 to 12%) as in the second series. Thus, the comparison shows that the experimental and simulated results differ by 12% at most at room temperature. Smaller sensors display a bigger difference, which is likely caused by the liquid moving in the experiment while being at rest in the simulation. It was noted in the experiments that a change in the fuel charge sign has little effect on the dissipation time. The location of a grounded terminal on the tank wall was demonstrated to have almost no effect on the dissipation rate.

Conclusion
The influence of grounded metallic terminals on the charge dissipation process was investigated experimentally. The conductivity of the jet fuel and the effective conductivity of the tank walls were investigated in addition. It was shown that the charge dissipation rate can be reliably measured using the charge-sensitive pre-amplifier. The experiments show that the charge dissipation time decreases exponentially with an increasing number of grounded terminals. The experimental data agree well with the numerical simulation results obtained using the COMSOL software package.
2019-04-30T13:03:48.478Z
2017-09-01T00:00:00.000
{ "year": 2017, "sha1": "1f64e34c851f4b8ea09f6cae0b922975c2c8e640", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/899/8/082002", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "fb64e9a7499d5c4bc3d3a12f75aa1af944684126", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
229461270
pes2o/s2orc
v3-fos-license
A Hot Blob Eastward of New Zealand in December 2019
A hot blob of near-surface water was identified eastward of New Zealand in the South Pacific in December 2019, which was the second strongest event on record in this region. Its sea surface temperature anomalies reached up to 5 °C, and the anomalous warming penetrated to around 40 m depth. From the atmospheric perspective, the anomalous high-pressure system from the surface up to 300 hPa lasted for about 50 days, accompanied by a blocking pattern at 500 hPa and a deep warm air column extending downward to the surface. A mixed-layer heat budget analysis revealed that the surface heat flux term was the primary factor contributing to the development of this hot blob, with more shortwave radiation due to the persistent high-pressure system and lack of clouds, as well as a higher temperature of the troposphere aloft denoted by sensible heat. The oceanic contribution, including the horizontal advection and vertical entrainment, was changeable and accounted for less than 50%. Moreover, we used the strongest hot blob event, which peaked in December 2001, as another example to evaluate the robustness of the results derived from the 2019 case. The results show similar circulation features and driving factors, which indicates the robustness of the above characteristics.

Introduction
Regional summer heatwaves have occurred more frequently over the globe since 2000 [1-4], which has attracted wide attention among researchers. Moreover, marine heatwave days have increased 54% globally since the early 20th century, according to the definition of Hobday et al. [5], in particular with an increase of 3-9 days per decade in the New Zealand region [6]. Recently, in December 2019, a hot blob (also termed "marine heatwave") appeared and spanned at least a million square kilometers (an area nearly four times larger than New Zealand) in the South Pacific (Figure 1a). This remarkable spike in sea surface temperature (SST) reached up to 5 °C above average across a massive patch and was located to the east of New Zealand, near the sparsely populated Chatham Islands archipelago. In addition, it was the biggest patch of anomalous warming over the global oceans at that time. As the news reported, this New Zealand "marine heatwave" brought tropical fish from 3000 km away [7]. This surge in ocean heat over a short period could have been difficult for local marine life if it had penetrated far beyond the surface. This record-breaking hot blob followed a marine heatwave two summers earlier that propelled the hottest summer of New Zealand on record, more than 3 °C above average SST, and led to tropical fish from Australia being found along the country's coast [8]. Furthermore, Salinger et al. [9] analyzed three unparalleled coupled ocean-atmosphere heatwaves around the New Zealand region during the austral summers of 1934/1935, 2017/2018, and 2018/2019. Heatwaves are natural hazards that have substantial impacts on human health, the economy, and the environment in both the atmosphere and ocean [10]. For example, the 2011 marine heatwave in Western Australia exerted great effects on marine biodiversity [11,12], while the ones near New Zealand also caused the rapid loss of snow and glacier ice and largely affected agriculture [8,9].
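For reference, the marine-heatwave statistics quoted above rest on the Hobday et al. [5] criterion: at least five consecutive days with SST above the day-of-year 90th percentile of a 30-year baseline. A minimal sketch of that detection step, for a hypothetical daily SST series with a precomputed percentile climatology:

```python
import numpy as np

def heatwave_days(sst, clim_p90, min_len=5):
    """Boolean mask of days belonging to marine-heatwave events: runs of at
    least `min_len` consecutive days with sst above the 90th-percentile
    climatology (Hobday et al. [5] style)."""
    hot = sst > clim_p90                      # daily exceedances
    mask = np.zeros_like(hot)
    run_start = None
    for i, h in enumerate(np.append(hot, False)):   # sentinel closes a trailing run
        if h and run_start is None:
            run_start = i
        elif not h and run_start is not None:
            if i - run_start >= min_len:      # keep only runs of >= 5 days
                mask[run_start:i] = True
            run_start = None
    return mask

# Example with a synthetic one-year series and a flat stand-in climatology:
rng = np.random.default_rng(0)
sst = 14 + 3 * rng.standard_normal(365)
p90 = np.full(365, 18.0)
print(heatwave_days(sst, p90).sum(), "marine-heatwave days")
```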
In the Northern Hemisphere, the record-breaking warm and persistent water mass in the Northeastern Pacific during 2013-2015, termed the "Pacific warm blob" [13] and "marine heatwave" [14,15], also had catastrophic influences on the climate, coastal ecosystems, and fisheries [16,17]. It led to the anomalously warm winter in Alaska [18] but a colder winter in wide regions of North America [19], the migration of marine species [20,21], a large number of dead species [20,22], and the shutdown of local fisheries [23], as well as possibly the severe drought in California [24]. However, the underlying physical mechanisms of marine heatwaves are less well understood. Several factors are documented to be potential drivers: the El Niño-Southern Oscillation (ENSO) [11,14,25], mesoscale eddies [26], oceanic heat advection/transport [27,28], and high atmospheric temperatures with low local winds [13,29-31]. However, research into marine heatwaves is still in its infancy, with little consensus about the driving mechanisms [10]. In short, many issues remain unresolved about marine heatwaves in the South Pacific. Given that there have been more and more local events in recent years [11], along with their associated disasters, considerable efforts are required to illuminate their driving mechanisms and to generate future projections of marine heatwaves [9]. In this study, we took the recent hot blob eastward of New Zealand in December 2019 as an example and analyzed its synoptic characteristics and heat budgets, aiming to quantify the contributions from the atmosphere and ocean. Moreover, we used another hot blob event, in December 2001, the strongest event east of New Zealand on record, as a second example for comparison and to further prove the robustness of the circulation anomalies and the leading driving factor derived from the recent December 2019 case.

Data and Methods
We employed both daily and monthly data to illustrate the hot blob events in this study. The atmospheric data were obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis 1 [32-34]. They had a horizontal resolution of 2.5° × 2.5° in latitude and longitude, with 17 standard pressure levels vertically. The climatology is the mean state computed by averaging the 30-year data from 1990 to 2019. We used the departure from the climatology as anomalies, according to Shi and Qian [35]. The variables include the air temperature, sea level pressure (SLP), geopotential height, surface winds, and heat fluxes. For the SST data, we utilized the monthly Extended Reconstructed Sea Surface Temperature (ERSST) V5 data at a 2° × 2° horizontal resolution from the National Oceanic and Atmospheric Administration (NOAA) [36,37]. The monthly SST climatology was constructed by averaging the SST from 1950 to 2019. The daily SST dataset was retrieved from the Optimum Interpolation (OISST) product with a 0.25° × 0.25° horizontal resolution [38]. The subsurface temperature and current velocity data were obtained from the NCEP Global Ocean Data Assimilation System (GODAS) [39,40]. We also used Argo subsurface temperatures for comparison [41]. We further calculated the anomaly of a given variable as the difference between the total value and the corresponding climatology. The focused blob area covered (34°S-52°S, 150°W-178°W) in the South Pacific.
For the definition of marine heatwaves, Hobday et al. [5] proposed a comprehensive one, considering an anomalously warm event to be a marine heatwave if it lasts for five or more days, with temperatures warmer than the 90th percentile based on a 30-year historical baseline period. However, we did not apply this definition because we do not focus on the detailed daily evolution of the hot blob cases in December 2001 and 2019. Instead, we give a relatively wide picture of the blobs, especially in the mixed-layer heat budget analysis, which is illustrated using the monthly data. To quantify the relative contributions of atmospheric and oceanic processes in the above hot blob events, we employed the mixed-layer heat budget analysis in November and December of 2019, according to Cronin et al. [42] and Schmeisser et al. [43]:

∂T/∂t = Q₀/(ρC_p h) − u·∇T + T_ent, (1)

where T is the SST anomaly (SSTA) over the blob area derived from ERSST V5, and the first term indicates the tendency of SST. Q₀ is the net heat flux at the ocean-atmosphere interface, which is the sum of the net shortwave radiation flux (SW), net longwave radiation flux (LW), net latent heat flux (LH), and net sensible heat flux (SH). The heat fluxes were regridded from NCEP-NCAR Reanalysis 1 to a 1° × 1° horizontal resolution using linear interpolation. All the heat flux terms were converted to positive downwards, to indicate the favorable condition for blob warming. ρC_p describes the heat capacity of water (ρC_p = 4.088 × 10⁶ J·°C⁻¹·m⁻³), and h is the long-term mean mixed layer depth, derived via the potential temperature at a 1° × 1° horizontal resolution from the NODC (Levitus) World Ocean Atlas 1994 [44,45]. The second and third terms on the right-hand side of equation (1) represent the oceanic horizontal advection in the mixed layer and the vertical entrainment at the bottom of the mixed layer, respectively. It should be noted that we did not explicitly calculate the vertical entrainment in this study, in part due to the lack of ocean subsurface data; we obtained this term as the residual by removing the heat flux and advection terms from the SST tendency term. More details can be found in Cronin et al. [42] and Schmeisser et al. [43]. In addition, the least-squares linear regression method is utilized to estimate the linear trends of regional blob variation.

Features of the Hot Blob Eastward of New Zealand in December 2019
Firstly, we inspected the annual cycle of the SST (Figure 1b) in the blob area of the South Pacific (black box in Figure 1a). In December, the climatological SST is around 14 °C, followed by the hottest months of January, February, and March. Due to the anomalous warming in 2019, the local SST could reach 20 °C in December. Moreover, the SST standard deviation (SD) in this region is large in December, indicating a relatively larger natural variability at this time. Furthermore, the historical context in this blob area is given by the monthly SSTA from January 1950 to December 2019 (Figure 1c). Within this area, an extreme hot blob event was present in December 2019, with an SSTA about 2.8 times the SD warmer than the climatological SST (also see Figure 1d). Clearly, it is common to see patches of warmer water off New Zealand, but this magnitude is the second largest on record. The event occurring in December 2001 was the strongest case during our study period (Figure 1c). These two events were prominent in magnitude compared to the SSTA at other times.
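For concreteness, the bookkeeping of equation (1) amounts to converting the net surface flux into an SST tendency and closing the budget with a residual, which is how the entrainment term is obtained here. A schematic sketch with placeholder numbers (the mixed-layer depth and flux values are illustrative, not those used in the paper):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600
RHO_CP = 4.088e6          # J degC^-1 m^-3, heat capacity of seawater (from the text)
H_MIX = 50.0              # m, hypothetical long-term mean mixed-layer depth

def flux_term(q0_wm2):
    """Net surface heat flux (W/m^2, positive downward) -> degC per month."""
    return q0_wm2 / (RHO_CP * H_MIX) * SECONDS_PER_MONTH

dT_dt = 1.2               # degC/month, observed SSTA tendency (placeholder)
q_flux = flux_term(60.0)  # e.g. 60 W/m^2 of net downward flux -> ~0.76 degC/month
advection = 0.3           # degC/month, horizontal advection (placeholder)
entrainment = dT_dt - q_flux - advection   # residual closure of equation (1)
print(q_flux, entrainment)
```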
It should be noted that the above two events both occurred in December, implying a potential phase-locking feature of hot blobs in this area during austral summer. On the other hand, there is a prominent increasing trend of SSTs in this blob region (Figure 1c), partly contributing to the extreme magnitude of these two blob events. In addition, this significant warming trend is very consistent with the global warming tendency. To depict the duration of the two extreme hot blobs, we calculated the normalized monthly SSTA averaged over the blob area (Figure 1d). Both of them are long-lived events in a monthly-based context if we take 1.0 as the threshold. The duration of the 2001 event is 11 months, while it is 10 months for the recent 2019 event (pink shading in Figure 1d). Therefore, the similarities and differences of the hot blobs (or marine heatwaves) in the two hemispheres should be further compared and illustrated [46].

Climatologically, westerlies prevail over the entire blob region, which is located to the southwest of the subtropical high (Supplementary Materials Figure S1). The subtropical high gradually weakens from November to December, associated with the weakening of the westerlies in the blob area. Figure 3 illustrates the evolution of the SLP and surface wind anomalies, as well as the total geopotential heights at 500 hPa, every 10 days from 1 November to 31 December of 2019. The surface high system persisted from mid-November to late December (Figure 3). The easterly anomalies weakened the climatological westerlies (Figure 3c-e) and thus reduced the heat loss of the ocean, via processes similar to those involved in the wind-evaporation-SST (WES) mechanism [47], favorable for the warming of the local SST. From the oceanic perspective, the easterly anomalies transported warmer water from the tropics via Ekman transport (Figure 3c-e), which also contributed to the positive SSTA of the blob region. Furthermore, there was less rainfall and dry weather under the continuous anticyclonic condition, causing more shortwave radiation to reach and heat the ocean surface. We notice that although the anomalous surface high was long-lasting, the wind anomalies within the blob area were a bit more variable (Figure 3). In terms of the geopotential heights at 500 hPa, a prominent ridge was stable, with a blocking pattern during mid-December of 2019 (Figure 3e). The importance of the blocking characteristic during a marine heatwave has also been documented in Salinger et al. [8,9].
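The sign of the Ekman argument above can be checked directly: in the Southern Hemisphere the Coriolis parameter f is negative, so the depth-integrated meridional Ekman transport M_y = −τ_x/(ρf) is negative (poleward, toward the blob) for an easterly wind-stress anomaly (τ_x < 0). A small sketch with illustrative numbers:

```python
import numpy as np

RHO_SEAWATER = 1025.0        # kg/m^3
OMEGA = 7.2921e-5            # rad/s, Earth's rotation rate

def meridional_ekman_transport(tau_x, lat_deg):
    """Depth-integrated meridional Ekman transport M_y = -tau_x / (rho * f),
    in m^2/s; tau_x is the zonal wind stress in N/m^2."""
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))   # Coriolis parameter, < 0 in the SH
    return -tau_x / (RHO_SEAWATER * f)

# Illustrative easterly anomaly of 0.05 N/m^2 at 43 deg S (mid-blob latitude):
print(meridional_ekman_transport(-0.05, -43.0))     # ~ -0.49 m^2/s, i.e. poleward
```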
To further reveal the atmospheric characteristics during November and December of 2019, we plotted the evolution of the temperature and geopotential height anomalies in the vertical direction, averaged over the blob region (Figure 4a). Similar illustrations are widely used in investigating extreme warm events in the atmosphere [35,48]. The anomalous warming was strongest in the middle-lower troposphere, with a deep warm air column extending upward to 300 hPa and downward to the surface. This persistent warming in the troposphere benefited the outstanding SST warming in the blob area in December 2019 through air-sea interaction (Figure 4a). The surface high-pressure system in Figure 3 corresponded to the long-lasting anomalous high circulation centered at around 300 hPa (Figure 4a). The anomalous high circulation and warming lasted for around 50 days and formed a prominent intraseasonal signal from the atmospheric perspective. We conclude that the atmospheric height and temperature configuration could stimulate the positive SSTA in the blob area.

Mixed-Layer Heat Budget of the Hot Blob Event in November and December 2019
In November, the SST of the blob area was warmed primarily by the positive heat flux from the atmosphere, with most of the contributions coming from the SW and SH (Figure 5). The contribution of horizontal advection in the mixed layer was positive, with a value generally equivalent to the temperature tendency. However, the vertical entrainment and diffusion processes played a negative role in warming the blob region at this stage, which largely offset the positive contribution from the heat fluxes at the ocean-atmosphere interface. Then, during December, there was a much larger warming tendency for the blob SST (Figure 5), associated with the quick development and occurrence of the hot blob in Figure 1a.
Quantitatively, the atmosphere and ocean played a generally equal role in this blob event (Figure 5). More SW penetrated the ocean surface and therefore heated the blob area, due to the deep high-pressure system (Figure 4a) and the lack of clouds. The positive SH anomaly contributed nearly 50% of the total heat flux term, which resulted from the direct heating of the warmer tropospheric column aloft (Figure 4a). We notice that the influence of oceanic processes reversed from negative in November to positive in December, possibly implying the seasonal variability of the upper ocean or a change in its internal thermodynamic properties, which needs further investigation.

Figure 5. The heat budget (in °C/month) of the blob area in November and December of 2019. The "dT/dt" term is the tendency of SST (green bar), the "heat flux" term (pink bar) is from the atmosphere, while the "horizontal advection" (blue bar) and "vertical entrainment and diffusion" (brown bar) terms are calculated in the ocean mixed layer. In addition, the specific SW, LW, LH, and SH terms are calculated and represented within the red-dashed box. All the heat flux terms are defined as positive downwards.

Comparison of the Historical Strongest Hot Blob in December 2001
To evaluate the robustness of the above results, we further analyzed the strongest hot blob event, which occurred in December 2001 (Figure 1c). In terms of the SLP and surface wind anomalies, the favorable anti-clockwise circulation anomalies were persistent from early November to middle-late December of 2001 (Figure 6). The easterly anomalies to the north of the above surface high system counteracted the climatological westerlies, reduced the evaporation and latent heat loss of the ocean, and heated the ocean surface in the blob area, similar to the WES mechanism.
In the Southern Hemisphere, the easterly wind anomalies could advect warmer water from lower latitudes to the blob area via Ekman transport, which also contributed to the positive SSTA of the blob region. Importantly, the blocking feature at 500 hPa was more evident (Figure 6e) compared to that in 2019 (Figure 3e), in accordance with their intensity difference. From the vertical section averaged over the blob area, we observed a significant warming in the troposphere aloft (Figure 4b), which warmed the oceanic surface through the sensible heat fluxes (Figure 7). This heating effect was especially intensive during the middle of December, with a warm core greater than 5 °C centered at around 400-500 hPa (Figure 4b). Correspondingly, the magnitude of the positive height anomalies was much greater compared to that in December 2019 (Figure 4a). The above anomalous height and temperature configuration satisfied the hydrostatic balance, according to previous studies [35,48].

Then, we conducted a similar mixed-layer heat budget analysis for this hot blob event (Figure 7). In November 2001, the surface heat fluxes and oceanic vertical processes were mostly responsible for the early warming of this hot blob. Then, in December, the heat fluxes became much larger, with a dominant positive SH anomaly indicating the heating from the atmosphere (Figure 7).
Moreover, the LH anomalies also became more prominent and favorable for the intensification of this hot blob, which was related to the easterly anomalies and the resultant lower evaporation and LH loss. Similar to the December 2019 case (Figure 5), the vertical entrainment and diffusion term also changed sign during these two months, which implied a possible complexity of the oceanic processes during austral summer. The horizontal advection played a positive role in both November and December, but with a limited magnitude (Figure 7).

Discussion
A hot blob with SSTA reaching up to 5 °C on specific days occurred eastward of New Zealand in the South Pacific in December 2019 and was reported to be the biggest warming patch over the global oceans at that time. Moreover, this blob was shown to be the second largest event on record in this region. The sudden warming of this hot blob was primarily contributed by the atmosphere, with positive heating due to the higher temperature of the troposphere aloft, denoted by sensible heat, as well as more shortwave radiation due to the persistent high-pressure system and, potentially, the lack of clouds. From the oceanic perspective, the vertical entrainment and diffusion processes exhibited a relatively changeable nature, while the horizontal advection promoted the development of the hot blob but with a minor importance. Similar investigations were also performed on the strongest case, in December 2001, and we obtained similar results in terms of both the circulation characteristics and the attribution derived from the mixed-layer heat budget analysis. However, we could not accurately calculate the effects of vertical entrainment and diffusion, due to the short coverage and the lack of real 3-D oceanic motion in the oceanic observations, which necessitates further exploration in the future. It is noted that the two strongest hot blob events east of the New Zealand region both peaked in austral summer (December 2001 and 2019). Based on more hot-blob samples in this region and better data in the future, we should explore whether these hot blob events have a phase-locking characteristic. Furthermore, the potential linkage between these regional hot blobs and ENSO in the tropical Pacific should also be investigated, although the studies by Salinger et al. [7,8] have some crucial implications.

Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1.
Figure S1.
2020-11-26T09:07:09.957Z
2020-11-24T00:00:00.000
{ "year": 2020, "sha1": "4288ef1a8a36f92f584f134b58a140b19bcc3da9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4433/11/12/1267/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "fd72e8f3f32f94aa3d2df388a80d480594f72e22", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Environmental Science" ] }
115558227
pes2o/s2orc
v3-fos-license
3pi photo-production with CLAS
The $3\pi$ system produced in the reaction $\gamma p \to \pi^{+}\pi^{+}\pi^{-} n$ at $4.8-5.4$ GeV is investigated from the E01-017 (g6c) running of CLAS. This energy range allows for the study of excited mesons in the 1-2 GeV mass range, in their decay to $3\pi$, proceeding through $\rho \pi$ and $f_2 \pi$ emissions. At these energies, there is significant overlap in phase space for events with $\rho$ and $f_2$ production, recoiling off an excited baryon, such as the $\Delta(1232)$, $N^*(1520)$ and $N^*(1680)$. We show that after few kinematic selections, events of the latter type are suppressed in the final data set, allowing us to perform a PWA on the $3\pi$ system.

Introduction
Due to the self-interacting nature of gluons, QCD allows for hybrid states with a (qq̄gⁿ) configuration, where the gluon excitation gives rise to a spectrum of additional states outside the constituent quark model. One of the signature hadronic states is a meson with J^PC = 1^-+ quantum numbers, which cannot be attained by regular (qq̄) mesons. These inherently exotic quantum numbers prevent the mixing of such a state with the conventional mesons of a (qq̄) configuration, thus simplifying its identification. There are several reasons behind selecting the charged 3π system for a Partial Wave Analysis (PWA). The first reason is the simplicity of the final state, with only a few decay channels open in the decay of the 3π system. The limited acceptance of the CEBAF Large Acceptance Spectrometer (CLAS) for forward-going particles associated with excited meson production poses a drawback for any meson spectroscopy experiment with CLAS in its current configuration; however, there is reasonable acceptance for detecting up to 3 charged particles in CLAS. Secondly, even though the dominant decay mode of this state is predicted to be through an S- and a P-wave meson emission, such as b₁(1235)π and f₁(1285)π, the ρπ decay channel is non-negligible 6. The evidence for the exotic π₁(1600) state in the π⁻p → π⁺π⁻π⁻p reaction at 18 GeV, by the Brookhaven E852 experiment 1,2, provided yet another motivation to search for the state in the charged 3π system at JLab. The production of the state was shown to be dominated by a natural-parity (most likely a ρ) exchange. In the framework of the Vector Dominance Model (VDM), with the photon beam turning into a vector meson, i.e. ρ, ω, φ, by reversing the roles of the beam and the exchange particle (Ex), this state should also be produced with a photon beam in the pion exchange channel. There are various discussions in the literature as to why photo-production may be a better production mechanism for exotic mesons 3,4,5. Photon beams, as probes for exotic meson production, have not been fully explored so far, and the existing data on multi-particle final states are very sparse. This experiment, with more statistics, should provide some guidance. There are three data sets of relevance to our analysis in photoproduction: the SLAC 1-m hydrogen bubble chamber experiment 7, using a backscattered laser photon beam of 19.5 GeV average energy from an incident electron beam of 30 GeV; the CERN hydrogen experiment 8, utilizing a tagged bremsstrahlung photon beam in the 25-70 GeV energy range; and the SLAC 40-in. hydrogen bubble chamber experiment, using 4.3 and 5.25 GeV photon beams, produced by 8.5 and 10 GeV positron annihilations in an LH₂ target 9. These experiments lacked the statistics required for a full PWA.
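The statement that J^PC = 1^-+ cannot be attained by a qq̄ pair follows from P = (−1)^(L+1) and C = (−1)^(L+S); a short enumeration makes this explicit (an illustrative sketch, not part of the original analysis):

```python
# Why J^PC = 1^-+ is "exotic": for a pure q-qbar pair with orbital angular
# momentum L and total spin S, P = (-1)^(L+1) and C = (-1)^(L+S). Enumerating
# the low-lying combinations shows that 1^-+ never appears.
def qqbar_jpc(l_max=3):
    allowed = set()
    for L in range(l_max + 1):
        for S in (0, 1):
            P = "+" if (L + 1) % 2 == 0 else "-"
            C = "+" if (L + S) % 2 == 0 else "-"
            for J in range(abs(L - S), L + S + 1):
                allowed.add(f"{J}{P}{C}")
    return allowed

print("1-+" in qqbar_jpc())  # False: not reachable by a q-qbar state
```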
In Ref. 7, the analysis of the π⁺π⁺π⁻ events in the reaction γp → π⁺π⁺π⁻n showed that the 3π spectrum in the low mass region is dominated by a₂(1320) production, with no clear evidence for the a₁(1260). In the high mass region, based on the angular distribution analysis of the 3π events, the group claimed evidence for a narrow state at 1.775 GeV with possible J^PC = 1^-+, 2^-+, or 3^++ quantum number assignments. From the analysis of the 4π events in γp → π⁺π⁻π⁺π⁻p, Ref. 8 reported two peaks, one at the mass of the a₂(1320) and another at around 1.75 GeV, in the ρπ state recoiling off the remaining pion. From the forward-peaked nature of the 3π system, the production mechanism was attributed to the Deck effect 10. No angular distribution analysis was performed on the data. The authors of Ref. 9 also reported two peaks in the ρπ spectrum from the analysis of the 3π data in the γp → π⁺π⁺π⁻n reaction, one at the mass of the a₂(1320) and one in the ∼1.7-1.85 GeV region. The production of the a₂ was shown to be consistent with a one-pion-exchange (OPE) mechanism. In the case of charged 3π photo-production, the reaction is a charge-exchange process, and as such, neither Pomeron nor ω exchanges are possible. Considering G-parity conservation, the pion, as well as the a₁ or a₂, are possible exchanges for the ρ content of the photon beam, while for the ω part of the beam the most likely exchange is the ρ. It is noteworthy that any contribution to Deck-effect enhancements 10 in either the ρ⁰π⁺ or the f₂π⁺ systems must come from the ω and φ components of the photon beam, due to G-parity conservation considerations.

Experimental Setup and Running Conditions
The data for this analysis were collected during Aug.-Sep. of 2001. The primary beam of 5.7 GeV electrons at 100% duty factor was provided by the Continuous Electron Beam Accelerator Facility (CEBAF). The secondary beam of photons is produced in Hall B via bremsstrahlung radiation, using a radiator of 3 × 10⁻⁴ radiation lengths. The photon beam energy is determined by a tagging system which measures the momenta of the scattered electrons 11. The tagger is capable of identifying photons in the 20%-95% range of the incident electron beam energy. An 18 cm long cell filled with LH₂ was used as the proton target. Hall B houses the CLAS detector. CLAS covers a large solid angle, with polar angle detection in the range 8° ≤ θ ≤ 145° and azimuthal angle coverage of 80%. The detector, composed of six independent sectors, provides a toroidal magnetic field, where in normal settings positively (negatively) charged particles bend outward (inward). The three sets of drift chambers embedded in the space between the magnet coils in the radial direction provide charged particle detection and track reconstruction. A set of time-of-flight scintillators (TOF) is used for charged particle identification, and a set of electromagnetic calorimeters (EC) is used for neutral particle detection. Further details of the CLAS detector design and performance are described elsewhere 12. To enhance the yield for the π⁺π⁺π⁻ channel, the running conditions for g6c were modified from the conventional photon beam runs at CLAS. To increase the photon flux, the experiment ran with a higher electron beam intensity (with ∼50% of the data collected at 40 nA and ∼50% at 50 nA).
These electron beam intensities correspond to a photon beam flux of ∼1.17 × 10⁸ γ/sec and ∼1.5 × 10⁸ γ/sec in the entire tagging range, and ∼8.8 × 10⁶ γ/sec and ∼1.1 × 10⁷ γ/sec in the top 15% of the photon beam energy (4.8-5.4 GeV). To increase the acceptance for the negatively charged (inbending) particles, the target was pulled back by 100 cm from the center of CLAS and the torus magnetic field was set to its half-maximum value, corresponding to the torus current I = 1938 A. The level I trigger for the experiment was composed of a coincidence between a signal in the first 12 tagger elements, a signal in 2 of the 3 Start Counter (ST) elements (an assembly of scintillators in three segments), and two charged particles in the TOF. The level II trigger required two tracks in the drift chambers in any two sectors of CLAS.

Event Selection
In the π⁺π⁺π⁻(n) final state, the three pions were detected in CLAS and the neutron was reconstructed by missing mass. Events which did not satisfy charge conservation in the reaction were rejected at the early stages of the analysis. In addition, only events with two identified π⁺, one π⁻, and no more than two detected neutral particles were selected. Vertex position cuts were applied to ensure the events originated within the target volume, and a vertex timing constraint was imposed to reduce the accidental coincidences between CLAS and the tagging system. The interaction of interest in our analysis is the 3π final state produced in a t-channel exchange process, as shown in Fig. 1. With the maximum available photon beam energy of 5.4 GeV, there is a non-negligible contribution from t-channel baryon resonance production from the two processes shown in Fig. 2. Of the two, the left process, with either the ρ or the f₂ recoiling off a ∆(1232)/N*, is by far the largest. In addition, observation of any features in the nππ distribution required a selection around the ∆(1232) in the nπ distribution. To enhance events from the process of interest, the photon beam energy was selected to be higher than 4.8 GeV. Furthermore, a peripherality condition was imposed by requiring the four-momentum transfer squared from the photon to the 3π system, −t′, to be less than 0.4 GeV². In addition, since the two positively charged pions are the pions most likely to take part in the production of the baryon resonance, only forward-going π⁺ were selected. The cut was defined as θ_lab(π⁺) ≤ 30°. In the remainder of this report, we refer to the latter two cuts as the "excited baryon rejection" cuts.

Figure 2. Background processes: 2π system recoiling off the nπ (left), single π recoiling off the nππ (right).

Data Distributions
The missing mass off the π⁺π⁺π⁻ for low −t′ events is shown in Fig. 3. The neutron peak sits on top of a linearly increasing background, with a signal-to-background ratio of approximately 9:1. The region between the lines, (0.884 ≤ mm ≤ 0.992) GeV, indicates the neutron selection cut. A Gaussian plus a 1st-order polynomial fit to the distribution in this region gives a mass of 0.942 GeV and 25 MeV for σ.

Figure 3. Missing mass off the π⁺π⁺π⁻ system for low −t′ events (−t′ ≤ 0.4 GeV²). The first peak is at the mass of the neutron, and the second peak is most likely associated with nγ or nπ⁰ production. The inset shows a Gaussian plus a first-order polynomial fit to the peak, giving a mass of 0.942 GeV and 25 MeV for σ.
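The missing-mass reconstruction used in the event selection above is simply mm² = (p_γ + p_target − Σ p_π)². A schematic sketch, with four-vectors as plain (E, px, py, pz) arrays and a made-up event rather than CLAS data:

```python
import numpy as np

M_PROTON = 0.938272  # GeV

def missing_mass(e_gamma, pion_p4s):
    """Missing mass off the detected pions, in GeV.
    pion_p4s: list of (E, px, py, pz) four-vectors in GeV."""
    beam = np.array([e_gamma, 0.0, 0.0, e_gamma])   # photon along z
    target = np.array([M_PROTON, 0.0, 0.0, 0.0])    # proton at rest
    miss = beam + target - np.sum(pion_p4s, axis=0)
    mm2 = miss[0] ** 2 - np.dot(miss[1:], miss[1:])  # E^2 - |p|^2
    return np.sqrt(mm2) if mm2 > 0 else float("nan")
```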
The −t′ distribution, defined as −t′ = −(t − t_min), with −t the four-momentum transfer squared from the photon to the 3π system, is shown in the left plot of Fig. 4. The shape of the distribution is consistent with the characteristics of peripheral production. The distribution after the "excited baryon rejection" cuts, as defined in Sec. 3, is fit to an exponential function of the form f(t′) = a e^(−b|t′|) over the range (0 ≤ −t′ ≤ 0.4) GeV². The exponential constant, b = 4.4 GeV⁻², is consistent with π and ρ exchange 13. In the 3π invariant mass spectrum, shown in the right plot of Fig. 4, two enhancements are evident, one in the 1300 MeV region and another in the 1600-1700 MeV mass range.

Figure 4. Left: t′ = t − t_min from the beam to the π⁺π⁺π⁻ system. The shaded histogram shows the distribution after choosing forward-going positively charged pions. The distribution is fit to an exponential of the form a e^(−b|t′|), with b = 4.4 GeV⁻². Right: π⁺π⁺π⁻ invariant mass distribution. The shaded histogram shows the distribution for events which passed the "excited baryon rejection" cuts, discussed in Sec. 3.

Figure 5 shows all three possible combinations of the nπ and ππ invariant mass distributions. In this analysis, the two positively charged pions were sorted by momentum, with π⁺₁ being the more energetic of the two. The two nπ⁺ combinations show peaks around the known baryon resonances, ∆(1232), N*(1520), and N*(1680), while the nπ⁻ shows a peak around the ∆(1232) only, as expected from isospin considerations. It is clear from the shaded distributions that the baryon resonance peaks are significantly suppressed after the "excited baryon rejection" cuts. The neutral 2π effective mass distributions show signals around the mass of the ρ(770) and the f₂(1270), as well as a shoulder at the mass of the f₀(980). The doubly-charged 2π combination does not show any distinct features, indicative of the lack of an isospin I = 2 state.

Partial Wave Analysis
The purpose of Partial Wave Analysis is to parameterize the observed intensity distribution in terms of a complete set of physically meaningful variables. In the formalism used here, the set of variables are the physical intermediate states produced in the reaction. This allows a direct extraction of the spin and parity of the states contributing to the total intensity, and the determination of the resonance behavior and properties, such as the mass, the width, and the decay properties of the produced states. The details of the formalism and the code are discussed elsewhere 14,15. Here, we only mention the basic idea and the assumptions made in the procedure.

PWA Formalism
In the analysis presented here, we have assumed that the production process (see Fig. 1) is dominated by t-channel production of a 3π system with Reggeon exchanges between the photon beam and the proton target, and that after the "excited baryon rejection" cuts, the background processes (see Fig. 2), with either one or two pions recoiling off a ∆/N*, are significantly reduced. The reaction γp → X⁺n, X⁺ → π⁺π⁻π⁺ is shown diagrammatically in Fig. 6, with the production of X⁺ in the Center of Mass (C.M.) frame, and its sequential decay to an isobar, I, and a π, followed by the decay of the isobar into the remaining 2π. The differential cross section for the reaction is given by

dσ ∝ |𝓜(θ, M, τ)|² dρ(τ) d cos θ dM,

with θ the polar scattering angle of the X⁺ in the C.M. frame (with ẑ in the beam direction), M the mass of the 3π system, 𝓜 the Lorentz-invariant amplitude, and dρ(τ) = p_cm dτ the phase-space element, with p_cm the breakup momentum of the C.M. system and τ a set of five kinematic variables required to describe the 3π system.
Partial Wave Analysis

The purpose of Partial Wave Analysis is to parameterize the observed intensity distribution in terms of a complete set of physically meaningful variables. In the formalism used here, the set of variables are the physical intermediate states produced in the reaction. This allows a direct extraction of the spin and parity of the states contributing to the total intensity, and the determination of the resonance behavior and properties such as the mass, the width, and the decay properties of the produced states. The details of the formalism and the code are discussed elsewhere 14,15. Here, we only mention the basic idea and the assumptions made in the procedure.

PWA Formalism

In the analysis presented here, we have assumed that the production process (see Fig. 1) is dominated by t-channel production of a 3π system with Reggeon exchanges between the photon beam and the proton target, and that after the "excited baryon rejection" cuts, the background processes (see Fig. 2) with either one or two pions recoiling off a ∆/N* are significantly reduced. The reaction γp → X⁺n, X⁺ → π⁺π⁻π⁺ is shown diagrammatically in Fig. 6, with the production of X⁺ in the Center of Mass (C.M.) frame, and its sequential decay to an isobar, I, and a π, followed by the decay of the isobar into the remaining 2π. The differential cross section for the reaction is written in terms of θ, the polar scattering angle of the X⁺ in the C.M. frame (with ẑ in the beam direction); M, the mass of the 3π system; M, the Lorentz-invariant amplitude; and dρ(τ) = p_cm dτ, the phase-space element, with p_cm the breakup momentum of the C.M. system and τ a set of five kinematic variables required to describe the 3π system. For τ we have chosen Ω_GJ: (θ_GJ, φ_GJ), the Gottfried-Jackson angles describing the X⁺ → Iπ⁺ decay in the X rest frame; Ω_h: (θ_h, φ_h), the helicity angles describing the I → π⁺π⁻ decay in the isobar rest frame; and w, the mass of the isobar.

Figure 6. Photo-production of the 3π system, X⁺, shown schematically in the C.M. frame, with the sequential decay of X⁺ to an isobar, I, and a π, in its rest frame, and the decay of I into the remaining two pions. p_cm and θ represent the breakup momentum and polar angle of the X in the C.M. frame. p represents the breakup momentum of I in the X rest frame, and q, the breakup momentum of one of the pions from the decay of I in its rest frame.

For the PWA presented here, the data are binned in t and the 3π mass, M. Since dt ∝ p_cm d cos(θ), the differential cross section in a given t and M bin reduces to the intensity distribution I(τ) in the decay variables. Once we have assumed an interaction mechanism, i.e. in our case, t-channel 3π production through Reggeon exchange between the photon beam and the proton target, we can write M in terms of the transition operator, T̂. The transition matrix element can be re-written in terms of the production of intermediate mesonic states, X, each with a unique set of quantum numbers, I^G J^PC m, where I denotes the isospin, G the G-parity, J the total spin, P the parity, C the charge-conjugation, and m the z-projection of the spin (chosen along the beam direction). Quantum-mechanically, this corresponds to expanding M in terms of a complete set of states, |X⟩. Assuming that T̂ is separable into two parts, T̂_p and T̂_d, where T̂_p and T̂_d are the production and decay operators for the intermediate states X, M can be re-written as a sum over states of production amplitudes times decay amplitudes. The PWA formalism adopted here is based on the isobar model 16, where the decay process of X to 3π is described through a series of sequential two-body decays, as shown in Fig. 7, with all possible isobars included in the decays.

Figure 7. t-channel exchange production of the possible 3π states, X⁺, with their subsequent sequential decay, X⁺ → Iπ⁺ followed by the isobar decay, I → π⁺π⁻.

Then I(τ) can be written as I(τ) = Σ_k |Σ_α V_αk A_α(τ)|², where the V_αk are the production amplitudes and the A_α(τ) the decay amplitudes. Here k is the number of possible external spin configurations, which in the case of an unpolarized photon beam and target can be up to 8; α: {J, P, C, m, L, l, (w, Γ)} is the set of quantum numbers describing a given state and its decay (as mentioned earlier, J, P, and C are the J^PC quantum numbers of X, and m the z-projection of its spin, J); L and l are the orbital angular momenta between I and π in the X → Iπ decay, and between the two pions in the I → ππ decay; w and Γ are the mass and the width of the isobars used in their Breit-Wigner description. The angular dependencies of the decays are handled using the Wigner D-functions, and the appropriate Blatt-Weisskopf angular momentum barrier factors are used for a given L and l involved in a decay chain. The decay amplitudes are constructed as eigenstates of reflectivity to take advantage of parity conservation in the production process 17. This choice reduces the possible number of external spin configurations by a factor of 2 and reduces the spin-density matrix to a block-diagonalized form, where there is interference only between amplitudes of the same reflectivity, ε.
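To make the coherent-sum structure of the intensity concrete, the sketch below evaluates I(τ) for a set of precomputed decay amplitudes; the amplitude values and array shapes are invented for illustration, and the incoherent index k simply labels the external spin configurations.

```python
import numpy as np

def intensity(V, A):
    """PWA intensity I(tau) per event.

    V: complex production amplitudes, shape (n_spin_configs, n_waves)
    A: complex decay amplitudes at the event kinematics tau,
       shape (n_waves, n_events)
    Incoherent sum over k of |sum_alpha V[k, alpha] * A[alpha]|^2.
    """
    coherent = V @ A  # shape (n_spin_configs, n_events)
    return np.sum(np.abs(coherent) ** 2, axis=0)

# Toy example: 2 spin configurations, 3 waves, 5 events
rng = np.random.default_rng(0)
V = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
A = rng.normal(size=(3, 5)) + 1j * rng.normal(size=(3, 5))
print(intensity(V, A))
```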
To see how well the description given above fits the data, event-based maximum likelihood fits are used in 40 MeV mass bins of 3π in the t range (0 ≤ t ≤ 0.4) GeV². The extended likelihood function is written as the product of the probabilities for finding n events in a 3π mass bin (Eq. 11). The normalization term in the denominator of Eq. 11 is determined through the calculation of normalization integrals over a mass bin. The finite experimental acceptance of the detector, η(τ), is determined by the Monte Carlo method, where a set of 3π events in the 1-2 GeV mass range with the same t distribution as the final data set were generated according to phase space in 20 MeV wide bins. The number of generated events in each bin was chosen such that the number of final accepted Monte Carlo events was 15 times greater than for the data in a given bin. The average acceptance as a function of 3π mass is a smoothly varying distribution and is on the order of 4%. The Monte Carlo events were subjected to the same analysis cuts as the data. For practical reasons, −ln(L) is minimized, rather than L maximized, by varying the production amplitudes as parameters in the fit; to this end, Eq. 11 is rewritten in terms of η_x = M_a/M_r, the M.C. acceptance, with M_a and M_r the number of accepted and raw M.C. events in a given 3π mass bin, and ^εΨ^a_αα′, the normalization integral calculated for the accepted M.C. sample. The number of events as predicted by the fit is given in terms of ^εΨ^r_αα′, the normalization integral calculated for the raw M.C. sample in a given 3π mass bin.

Preliminary PWA Results

The PWA results shown here are still very preliminary. In the choice of waves included in the fits we have taken a "minimalistic" approach, where only a minimal set of states is included in the fits. The list of 35 + 1 waves used in the PWA fit for which we present results here is shown in Table 1. Since the partial waves are represented by complex amplitudes, 35 waves correspond to 70 parameters, plus one parameter for the non-interfering Background term.

Table 1. Set of partial waves used in the PWA fit.

Fig. 8 shows the intensity distributions for various wave sets, as extracted from the PWA. Each point on the plots is the result of an independent fit. The fit is a rank 1 fit. The strongest signal observed is in the 2⁺⁺ intensity, at the mass of the a₂(1320). The width of the signal (∼120 MeV) is in agreement with the PDG value. The next strongest signals are observed with approximately the same strength in the 1⁺⁺ and 2⁻⁺ intensities. The 1⁺⁺ intensity shows an enhancement at the mass of the a₁(1260), and the 2⁻⁺ intensity shows strength at the mass of the π₂(1670). The 0⁻⁺ intensity shows some enhancement around the region where the π(1800) is expected. The exotic 1⁻⁺ wave intensity shows a peak at approximately 1700 MeV with a narrow width of ∼160 MeV. The quality of the PWA fit results is determined by comparing various data distributions with the data set predicted by the PWA fit results. The predicted data set is obtained by weighting the accepted phase-space Monte Carlo events by the results of the PWA. In Fig. 9 we show the 2π and nπ distributions for the data (red shaded histograms) and the PWA-predicted data set (black points). As can be seen, there is good qualitative agreement between the fit results and the data.

Figure 9. Quality of the PWA fit results. Comparison of various distributions between the "experimental data" (red shaded histograms) and the "predicted data" (black points) as determined by PWA. Top: ππ invariant mass distributions. Bottom: nπ invariant mass distributions.
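A heavily simplified sketch of the extended-likelihood quantity minimized in these fits is given below, for a single reflectivity block and rank-1 production amplitudes; the normalization integral over accepted Monte Carlo stands in for ^εΨ^a_αα′, and all shapes are illustrative.

```python
import numpy as np

def negative_log_likelihood(V, A_data, psi_accepted):
    """Extended -ln(L) for an event-based PWA fit (simplified sketch).

    V:            complex production amplitudes, shape (n_waves,)
    A_data:       decay amplitudes at observed events, shape (n_waves, n_events)
    psi_accepted: normalization integrals over accepted MC,
                  shape (n_waves, n_waves)
    """
    per_event = np.abs(V @ A_data) ** 2              # model intensity per event
    n_pred = np.real(np.conj(V) @ psi_accepted @ V)  # predicted yield
    return n_pred - np.sum(np.log(per_event))
```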
There is some disagreement between the two data sets in the nπ⁺₂ distribution. Since π⁺₂ is the lower-momentum of the two positively charged pions, it is more likely to be associated with baryon resonance production at the lower vertex. The disagreement is therefore indicative of the level of this background (i.e., the ∆/N* events remaining after the "excited baryon rejection" cuts).

Conclusions

We have performed a PWA on a sample of ∼84,000 π⁺π⁺π⁻ events from the g6c experiment at CLAS. The PWA fit results are very encouraging but not finalized. From the results shown in Fig. 8 we can see fluctuations from bin to bin in the fit results. This could be an indication that for some bins we do not have the fit with the best likelihood value, but rather a result from one of the local minima. To remedy the situation, we will perform many fits in a given bin for the same set of input waves, but with different random starting values for the parameters, and look for the best solution. We have also recently performed "tracking" fits, where the results for the parameters in a given bin are used as the starting values for the fit in the adjacent bin. We see improvement in the continuity of the fit results from bin to bin after tracking. The number of waves used in the lower 3π mass region can be reduced further by using a different set of input waves for the low (1.0-1.4) GeV and high (1.4-2.0) GeV 3π mass regions. For instance, all f₂(1270)π waves can be eliminated for fits below the 1.4 GeV mass range due to threshold considerations. We will also try different modeling of the background term, which at present is included as a non-interfering isotropic wave. A Mass Dependent (M.D.) fit to the results of the final PWA fits will allow us to extract resonance parameters of the observed states.
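The multi-start and tracking strategies outlined in the conclusions could be organized as in the sketch below; the per-bin likelihood callables, parameter count, and starting-value scales are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def fit_bin(nll, n_params, n_starts=25, x0=None, rng=None):
    """Fit one 3pi mass bin, keeping the best of several starts.

    nll: callable mapping a parameter vector to -ln(L) for this bin
    x0:  optional seed (e.g. the adjacent bin's solution, for tracking fits)
    """
    rng = rng or np.random.default_rng()
    starts = ([x0] if x0 is not None else [])
    starts += [rng.normal(size=n_params) for _ in range(n_starts)]
    fits = [minimize(nll, s, method="BFGS") for s in starts]
    return min(fits, key=lambda f: f.fun)  # best likelihood wins

# Tracking: seed each mass bin with the previous bin's best parameters.
# prev = None
# for nll_bin in nll_per_bin:            # hypothetical list of per-bin NLLs
#     best = fit_bin(nll_bin, n_params=71, x0=prev)
#     prev = best.x
```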
2019-04-14T02:27:01.880Z
2003-05-14T00:00:00.000
{ "year": 2003, "sha1": "82c64e37baf5e746fb6a3046d0159d512f1eeab2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "82c64e37baf5e746fb6a3046d0159d512f1eeab2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
244832275
pes2o/s2orc
v3-fos-license
Modeling and Simulations of 4H-SiC/6H-SiC/4H-SiC Single Quantum-Well Light Emitting Diode Using Diffusion Bonding Technique In the last decade, silicon carbide (SiC) has emerged as a potential material for high-frequency electronics and optoelectronics applications that may require elevated temperature processing. SiC exists in more than 200 different crystallographic forms, referred to as polytypes. Based on their remarkable physical and electrical characteristics, such as better thermal and electrical conductivities, 3C-SiC, 4H-SiC, and 6H-SiC are considered the most distinguished polytypes of SiC. In this article, physical device simulation of a light-emitting diode (LED) based on the unique structural configuration of 4H-SiC and 6H-SiC layers has been performed, based on a novel material joining technique called diffusion welding/bonding. The proposed single quantum well (SQW) edge-emitting SiC-based LED has been simulated using a commercially available semiconductor device simulator, SILVACO TCAD. Moreover, by varying different design parameters, the current-voltage characteristics, luminous power, and power spectral density have been calculated. Our proposed LED device exhibited promising results in terms of luminous power efficiency and external quantum efficiency (EQE). The device numerically achieved a luminous efficiency of 25% and an EQE of 16.43%, which is on-par performance for a SQW LED. The resultant LED structure can be customized by choosing appropriate materials of varying bandgaps to extract the light emission spectrum in the desired wavelength range. It is anticipated that the physical fabrication of our proposed LED by direct bonding of SiC-SiC wafers will pave the way for the future development of efficient and cost-effective SiC-based LEDs. Introduction The phenomenon of electroluminescence was observed for the very first time in a crystal of silicon carbide (SiC) in 1907. At that early stage of development, the light emission process in semiconductors was not well understood [1]. However, significant progress in the field of solid-state light-emitting diodes (LEDs) was made in the 1990s with the development of the first blue LED [2]. In recent years, the astonishing features of solid-state LEDs such as high luminous efficiency, high color rendering index (CRI), and longevity have made them promising candidates to replace conventional light sources [3]. The current research trends involve a systematic investigation of a variety of organic/inorganic materials for a wide range of optoelectronic applications [4-10]. SiC is a wide bandgap compound semiconductor material that is highly suitable for optoelectronic applications involving operation at higher temperatures. It possesses high thermal stability, enhanced electric field maintenance, and a remarkable physical strength compared to that of silicon (Si). It is also highly suitable for light-emitting applications. SiC exists in several polytypic forms, among which 4H-SiC and 6H-SiC have gained attention due to their attractive physical and electronic attributes [11]. In recent years, SiC has attained significant importance for many quantum-scale optoelectronic applications, especially for single-photon emitter (SPE) applications [12]. Nuclear spins of SiC can exist in spin-free states [13] that allow color coherence and long coherence times in SiC quantum devices [14,15].
In SiC-based quantum optoelectronic devices, charge injection results in an increased background emission of light [16]. Although the generation of single photons has been reported in several systems, none of them is suitable for room temperature applications (especially in telecommunication services). The development of SiC LEDs with light emission in the visible and near-infrared regions has also been reported in the literature [17]. Doped SiC can give high quantum efficiencies to provide high donor-to-acceptor luminescence [18]. Moreover, SiC is an extraordinary candidate for high-power optoelectronic applications. Its light emission characteristics have several attractive attributes, such as higher efficiency, higher electrical conductivity, longer lifetime, and higher output electrical power compared to conventional LEDs. In conventional LEDs, for high-power applications, heat dissipation becomes a crucial challenge that must be overcome. There are many other factors, such as current crowding, Auger recombination, electron leakage, and the polarization effect, that degrade the performance of the device in terms of efficiency [19]. Overheating of LEDs can be avoided by using SiC due to its extremely low thermal expansion and high thermal conductivity. The light emission and extraction from SiC-based LEDs have not been discussed in detail in the literature, since it is extremely challenging to understand their optoelectronic attributes [20,21]. The focus of recent research has been to improve the performance of LEDs with different fabrication techniques [22,23]. In order to extract the maximum performance from the devices, the majority of modern solid-state devices have been designed and developed using a heterostructure configuration. The development of various optoelectronic devices (i.e., LEDs and lasers) is based on the realization of the double heterostructure configuration. The benefits of utilizing a double-heterostructure scheme in LEDs include optical confinement as well as minimizing the reabsorption probability of emitted photons [24]. The motivation of our research is to contribute to the theoretical research by investigating the fabrication feasibility of SiC-based LEDs using a novel technique called diffusion bonding/welding. For this purpose, a SiC-based edge-emitting LED has been simulated in this work via the novel technique of diffusion welding. Edge-emitting LEDs have been primarily used as a light source in fiber optic communication links. The diffusion welding or diffusion bonding method is used to join dissimilar materials (usually for making different metal-metal and metal-semiconductor interfaces). The working principle is based on the solid-state diffusion process, in which atoms diffuse from a region of higher concentration to one of lower concentration. This method has several economic advantages over state-of-the-art epitaxial techniques: it reduces fabrication complexity and processing time, and allows high material utilization [25]. This method has rarely been used to make semiconductor-semiconductor interfaces. The state-of-the-art epitaxial growth techniques for fabricating a SiC-SiC heterostructure are complex and require higher deposition temperatures [26]. Device fabrication with conventional epitaxial growth techniques results in the wastage of semiconductor materials due to evaporation of the materials [27-29].
For the purpose of analysis and attaining fabrication feasibility, the prototype device's performance parameters are first optimized by employing simulators before the commencement of the physical fabrication process. In this work, the fabrication feasibility of our proposed LED structure has been investigated using a commercially available semiconductor device simulator (SILVACO TCAD) via the diffusion welding approach. This technique has been successfully used to fabricate metallic contacts for SiC-based power electronics devices. However, it has rarely been used for the fabrication of SiC-SiC based heterostructures, due to the extreme hardness of SiC wafers and their ability to withstand higher temperatures. Physical fabrication of SiC-SiC heterostructures using our proposed technique could be a breakthrough in semiconductor processing technology [30]. Our research group at Tallinn University of Technology has been working to fabricate SiC-SiC heterostructures by direct bonding of wafers for the last few years. Jana et al. proposed and fabricated a SiC diode-based voltage doubler by using the diffusion welding principle; they implemented aluminum foil as a connecting material to join the diodes [31]. A similar concept has been reported by Natalja et al. to join SiC-based diodes [32]. Oleg et al. successfully fabricated a prototype of a SiC-based Schottky diode stack by the diffusion welding process [33]. It is pertinent to mention that the fabrication of our proposed/simulated device may need wafer surface treatment so that the surface appears smooth and homogeneous. The optimized temperature and pressure required to bond the wafers successfully need to be investigated for prospective physical fabrication of our proposed device. As far as the epitaxial growth of 6H-SiC is concerned, it has been reported in the literature. Epi-layer growth of 6H-SiC on a 6H-SiC substrate has been performed via chemical vapor deposition in the past [34]. Satoshi Tamura et al. successfully deposited high purity 6H-SiC by cold chemical vapor deposition [35]. Tsunenobu et al. also reported epitaxial growth of 6H-SiC in a step-controlled epitaxy [36]. Deposition of SiC epitaxial layers with chemical vapor deposition has also been reported in the literature [37]. Surface treatment of thick wafers of 4H-SiC is required to get a smooth surface for better physical bonding of SiC-SiC wafers using diffusion bonding/welding techniques. Chemical etching of 4H-SiC in the presence of a platinum catalyst could be a promising technique for polishing and flattening SiC wafers to get better adhesion for the diffusion welding process [38]. Catalytic etching is a damage-free technique for the polishing of 4H-SiC wafers [39]. In this article, a novel design of a SiC-SiC polytype-based double-heterostructure edge-emitting LED has been proposed using the diffusion welding process. In the SILVACO TCAD simulator, a thin active layer of intrinsic 6H-SiC has been sandwiched between heavily doped P-type and N-type 4H-SiC to make a single quantum-well (SQW) double heterostructure 4H-SiC/6H-SiC/4H-SiC LED. To the best of our knowledge, this configuration of SiC LED with our proposed technique has not been reported yet in the literature. Physical fabrication of the proposed LED structure with diffusion welding/bonding will simplify the fabrication process and reduce the device fabrication cost (as the diffusion welding process is simple with a low running cost).
Materials and Methods

This section elucidates the methodology that has been adopted for the simulations of the proposed device. In the diffusion welding technique, dissimilar thick material sheets are pressed against each other under high pressure and temperature to form their junctions. The schematic of the process is shown in Figure 1, where a thick sheet of material A is joined with material B under high pressure and temperature. The same concept has been used for the microscale simulations of our proposed SiC-based SQW LED with SILVACO TCAD. For our simulated device, the diffusion welding technique has been realized by joining thick wafers of SiC polytypes directly with each other instead of depositing epitaxial layers. The simulation steps are shown in Figure 2. A 300 µm thick wafer of N-type 4H-SiC with a surface area of 1 × 1 µm² has been defined in SILVACO TCAD (as shown in Figure 2a). In the next step, a 1 µm thin epi-layer of intrinsic 6H-SiC has been deposited on the N-type 4H-SiC substrate to form a stack, as shown in Figure 2b. In the final step, stack B (N-type 4H-SiC with the intrinsic epi-layer of 6H-SiC) has been joined with a 300 µm thick wafer of P-type 4H-SiC (stack A), as in the diffusion bonding approach (illustrated in Figure 2c). 4H-SiC wafers are commercially available with a thickness of 300 µm; keeping the prospective physical fabrication and the availability of the materials in mind, the same thickness (i.e., 300 µm) has been chosen for the simulations. Heavily doped N- and P-type 4H-SiC layers with concentrations of 10¹⁸ cm⁻³ have been used in the simulated LED. The final simulated structure based on the above-mentioned steps is depicted in Figure 3. This type of LED is called an edge-emitting LED, as light is emitted from the edges (not from the surface), as shown in Figure 3. Edge-emitting LEDs are used in optical communication links as a source of light [40]. As the bandgap of 4H-SiC is larger than that of 6H-SiC, the active layer of intrinsic 6H-SiC forms a quantum well between the 4H-SiC wafers (as illustrated in Figure 4a). An emitted photon from the quantum well is shown with a red wave of light in Figure 4a.
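The layer sequence described above can be summarized in a small data structure; the sketch below is purely illustrative bookkeeping for the stack (materials, thicknesses, and doping as given in the text), not SILVACO input syntax.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    material: str
    thickness_um: float
    doping_cm3: float  # signed: positive for N-type donors, negative for P-type acceptors

# Simulated SQW LED stack, top to bottom
led_stack = [
    Layer("4H-SiC", 300.0, -1e18),  # P-type wafer (stack A)
    Layer("6H-SiC", 1.0, 0.0),      # intrinsic active layer (quantum well)
    Layer("4H-SiC", 300.0, +1e18),  # N-type substrate wafer (stack B)
]
```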
Furthermore, we considered a direct bonding process for our simulations, since the diffusion welding process is also applicable to extremely thin sheets of materials [25]. Our proposed SiC-based SQW LED can also be fabricated by the integration of epitaxial growth (a state-of-the-art technique used to fabricate SiC devices) [41] and the diffusion bonding technique (a solid-state material joining technique) [25,42]. Several appropriate physical models related to LEDs have been used during the simulations.
Polarization modeling is critical for wurtzite (WZ) materials, as the effect of polarization in WZ materials can cause quantum confinement in QW LEDs. This phenomenon plays an important role in the reduction of radiative recombination in LEDs [43]. The polarization model has been implemented and enabled by the POLARIZATION command in the MODEL statement in SILVACO TCAD. Additionally, an analytical low-field mobility model that is dependent on temperature and the concentration of charge carriers has also been implemented. In this model, electron and hole mobilities are first defined by the MUNO and MUPO parameters. Then, this model is enabled by the ANALYTIC statement under MODELS. This model relates the low-field carrier mobility to the concentration of impurities and the temperature; it was proposed by Caughey and Thomas [44,45]. The specific material parameters for this model related to 4H-SiC and 6H-SiC are given by Lades [46]. The bandgap narrowing model is an important parameter for heavily doped devices. As the N- and P-type regions of the proposed LED are heavily doped, this model plays a crucial role in assessing electrical performance; it has been enabled by using the BGN statement. The Shockley-Read-Hall model is used to fix the minority charge carriers' lifetime, and it is enabled by the SRH statement in SILVACO TCAD. Similarly, the Auger (AUGER) model is used for the direct transition of carriers. Selberherr's impact-ionization model has been used for temperature-dependent parameters and is enabled by the IMPACT SELB statement. Several material-related parameters, such as bandgap, permittivity, electron affinity, electron density of states, hole density of states, charge carrier lifetimes, and charge carrier mobilities, have been used during the implementation of the different physical models. Most of the material-related parameters have been taken from the Atlas user's manual [47] and are written in Table 1.
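As a rough illustration of the Caughey-Thomas analytic low-field mobility model referenced above, the following sketch implements its standard functional form; the default parameter values are placeholders and not the 4H-SiC/6H-SiC values tabulated by Lades [46].

```python
def caughey_thomas_mobility(N, T, mu_min=40.0, mu_max=950.0,
                            n_ref=2e17, delta=0.76, alpha=-2.4):
    """Low-field mobility [cm^2/(V.s)] versus doping N [cm^-3] and temperature T [K].

    Standard Caughey-Thomas form:
        mu = mu_min + (mu_max*(T/300)^alpha - mu_min) / (1 + (N/n_ref)^delta)
    All default parameter values are illustrative placeholders.
    """
    mu_lattice = mu_max * (T / 300.0) ** alpha
    return mu_min + (mu_lattice - mu_min) / (1.0 + (N / n_ref) ** delta)

# Example: mobility at the LED's doping level of 1e18 cm^-3 at room temperature
print(caughey_thomas_mobility(N=1e18, T=300.0))
```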
Results

In this section, the results obtained from the simulated LED have been included and discussed in detail. Several results demonstrating the performance of the simulated device, such as the energy band profile, current-voltage (IV) characteristics, luminous power, and power spectral density as a function of wavelength, have been presented. The plots are generated through the SILVACO TCAD software using TONYPLOT (the graphical module of TCAD to visualize results of simulated devices). TONYPLOT is a graphical user interface that is used with all SILVACO simulators to generate graphs. Comprehensive detail about this graphical user interface can be found in reference [48].

Energy Band Profile and IV-Characteristics of Simulated LED

The energy band profile as a function of LED depth is shown in Figure 5. The simulated device is a double heterostructure, containing a quantum well formed by intrinsic 6H-SiC between P-type 4H-SiC and N-type 4H-SiC. Band bending at each heterojunction interface can be observed in Figure 5. The charge carriers can diffuse easily through the heterojunction interface layers of the simulated device. Diffusion and tunneling are the key mechanisms that govern the transport of charge carriers through heterostructure devices. The diffusion process controls the uniformity of current spreading, whereas the tunneling phenomenon contributes to limiting the operational voltage of the device [49]. When an external voltage is applied at the electrodes of the device, the charge carriers spread widely due to the presence of the QW in the structure. A bias voltage is applied and the current density has been measured for the simulated device, as shown in Figure 6. The device starts to conduct a significant amount of current at approximately 2.6 V. After turning on, the current density starts to increase gradually and reaches a maximum value of 39 kA/cm² at approximately 6 V, as shown in Figure 6. The formation of the QW in the simulated device promotes the vertical injection of the current and consequently reduces the turn-on voltage of the device.

Luminous Power of Simulated SiC LED

A voltage bias of 0 to 6 V has been applied at the anode of the simulated LED. To analyze the luminous power output of the simulated LED, we need to know the current density (J_s). The luminous power output as a function of current density is given in Figure 7. At the turn-on voltage of 2.6 V, the current density reaches a value of approximately 5 kA/cm², and the luminous power at this point is 4 × 10⁻⁵ W/cm, which is quite low. The current density of the simulated LED gradually increases after the turn-on voltage. At a bias voltage of 6 V, the current density reaches its maximum value, as shown in Figure 6. At this maximum current density of 39 kA/cm², the luminous power is 0.00028 W/cm. Although it is a SQW LED, the luminous power is still not so low. If we were to add multiple quantum wells (MQW) to this structure, it would definitely increase the luminous power output of the device, because LED devices based on periodic MQWs have high luminous power for several reasons.
MQWs increase current spreading and charge carrier confinement, and ionization of electrons at high bias voltages increases the probability of recombination of charge carriers [50]. However, we introduced only a SQW to reduce the complexity of fabrication and the cost of the device.

Power Spectral Density of Simulated SiC LED

The power spectral density as a function of wavelength has been measured at two fixed voltages (4 V and 6 V) in TCAD. The emitted power spectral density of the simulated LED has been extracted using the TONYPLOT module of TCAD and is shown in Figure 8. Our proposed device obtained a power output of 7 W/(cm·eV) at the wavelength of λ = 405 nm for a fixed V_bias = 4 V, as shown in Figure 8 (curve-1), whereas the same device achieved a power output of 21 W/(cm·eV) at λ = 410 nm for V_bias = 6 V, as shown by curve-2 of Figure 8. For a QW SiC-based LED, the height and width of the spectral density would be higher compared to those of a device without a QW. We could not find any similar edge-emitting LED structure in the literature for comparison that is purely based on SiC-SiC wafers considering the diffusion bonding approach.

Calculations for Luminous Efficiency and External Quantum Efficiency of Simulated SiC LED

The performance of an LED is evaluated on the basis of percentage luminous efficiency and percentage external quantum efficiency (EQE). For these calculations, the simulated LED structure is biased at a fixed V_bias = 6 V. After that, the radiative recombination rate and the total recombination have been measured at the current density of 39 kA/cm² using the TONYPLOT module. To calculate the percentage luminous efficiency of the simulated LED, the radiative recombination rate (r) has been divided by the total recombination (T), and the percentage luminous efficiency has been calculated, as given in Table 2. Our simulated structure showed 25% luminous efficiency. This efficiency is quite significant for a SQW LED. Furthermore, the percentage external quantum efficiency of the simulated LED has also been calculated. For these calculations, the flux spectral density (Φ) of the LED has been measured at the current density of 39 kA/cm² using the TONYPLOT module. Then this flux spectral density is multiplied by the charge q, where q = 1.602 × 10⁻¹⁹ Coulombs. Finally, this value is divided by the bias current density to obtain the external quantum efficiency. All these calculations and values have been given in Table 3. Our simulated LED showed 16.43% EQE. This efficiency is also quite good for a SQW SiC LED. The formulae used for the calculation of the luminous efficiency and the quantum efficiency have been taken from the literature [50]. Both obtained efficiencies of our proposed LED can be optimized and customized numerically by choosing appropriate physical models and tuning material parameters.
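The two efficiency figures quoted above follow from simple ratios, sketched below; the photon flux value is a placeholder chosen only to reproduce the reported EQE, standing in for the TONYPLOT readout.

```python
Q_E = 1.602e-19  # elementary charge [C]

def luminous_efficiency_pct(r_radiative, r_total):
    """Percentage luminous efficiency: radiative / total recombination."""
    return 100.0 * r_radiative / r_total

def eqe_pct(photon_flux, j_bias):
    """Percentage EQE: (photon flux x q) / bias current density.

    photon_flux: emitted photon flux density [photons/(s.cm^2)] (placeholder)
    j_bias:      bias current density [A/cm^2]
    """
    return 100.0 * photon_flux * Q_E / j_bias

j = 39e3                # A/cm^2, maximum simulated current density
phi = 0.1643 * j / Q_E  # placeholder flux implying the reported EQE
print(eqe_pct(phi, j))  # -> 16.43
```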
Comparison of Luminous Efficiency and External Quantum Efficiency of Simulated SiC LED with Literature

In this section, the luminous and external quantum efficiencies of the simulated SiC LED are compared with a similar LED structure reported in the literature. Epitaxial growth of SiC polytypes is extremely difficult and costly due to the material's ability to resist extremely high temperatures. Therefore, to avoid complications in the fabrication process and to reduce the cost incurred by epitaxial growth techniques, a single QW has been used in our proposed LED structure, and we opt for the fabrication of the SiC-based LED with a novel technique, called diffusion welding, which is a cheap and simple fabrication technique. The comparison of the luminous and external quantum efficiencies of our simulated SQW SiC LED has been made with a similar GaN MQW LED structure reported in the literature. Despite the presence of only a SQW, our device exhibited impressive luminous and external quantum efficiencies compared to those of the MQW LED device reported in the literature [50]. The comparison of the results has been tabulated in Tables 4 and 5. The reason for the high luminous and external quantum efficiencies of the GaN-based LED device reported in the literature is the presence of MQWs (devices A, B, and C have been taken from the literature [50]).

Conclusions

In this article, a novel design of a SiC-based edge-emitting LED has been simulated by realizing a novel technique: diffusion welding/bonding. We proposed a unique combination of a SiC-SiC polytype-based heterostructure LED made by direct bonding of SiC wafers. This type of SiC-based LED has not yet been reported in the existing literature, since it is extremely challenging to join SiC wafers directly due to their ability to resist even extremely elevated temperatures. Considering the research gap, a comparative performance analysis of the devised LED structure has been carried out with a GaN-based LED reported in the literature. However, it is noteworthy that the comparative LED possesses the attributes of MQWs, whereas our device has been realized with a SQW. Moreover, we delineated a novel direct bonding technique for joining SiC-SiC wafers to reduce the complexity and cost of fabrication. Our simulated LED device exhibited promising results in terms of luminous efficiency and external quantum efficiency. The simulated device achieved 25% luminous efficiency and 16.43% external quantum efficiency with only one quantum well, formed by the active layer of 6H-SiC. Prospective fabrication of our proposed device with diffusion welding will dramatically reduce the device cost, since the diffusion welding technique has several economic advantages over state-of-the-art epitaxial techniques. This process is very simple and allows high material utilization. It reduces fabrication complexity and processing time. The characteristics of our proposed LED device can be customized by choosing appropriate materials with varying bandgaps to obtain the wavelength of the emitted light in the desired range. Catalytic etching has been proposed for the polishing of 4H-SiC wafers to get a flatter surface at the atomic level, in order to improve the bonding capability of SiC wafers for the prospective physical fabrication of the device.
2021-12-03T16:31:50.704Z
2021-11-30T00:00:00.000
{ "year": 2021, "sha1": "667943a0cc1cb23b8badbaf45052525848c910ab", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-666X/12/12/1499/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b40fb1865de9ad819d5c3d4d8b3077a43f546eb4", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
56337265
pes2o/s2orc
v3-fos-license
The study of clinical, biochemical and hematological profile in malaria patients Malaria is a protozoan disease transmitted by the bite of infected Anopheles mosquitoes. The most important of the parasitic diseases of humans, malaria is a major cause of morbidity in the tropics; the disease is of global importance, resulting in 300-500 million cases and 1.5-2.7 million deaths yearly. 1 Approximately 2.48 million malarial cases are reported annually from south Asia, of which 75% of cases are contributed by India alone. 2

INTRODUCTION

Since time immemorial, malaria has been one of the most prevalent human diseases. Malaria is a disease as old as humanity itself and is often called the "King of diseases". In ancient India, malaria was known as the king of diseases because it formed a vicious cycle of sickness, death and poverty. Malaria is a protozoan disease transmitted by the bite of infected Anopheles mosquitoes. The most important of the parasitic diseases of humans, malaria is a major cause of morbidity in the tropics; the disease is of global importance, resulting in 300-500 million cases and 1.5-2.7 million deaths yearly. 1 Approximately 2.48 million malarial cases are reported annually from south Asia, of which 75% of cases are contributed by India alone. 2 Malaria is a febrile illness characterized by fever and related symptoms; however, it is very important to remember that malaria is not a simple disease of fever, chills and rigors. The number of atypical presentations of malaria has gradually increased during the past few decades. 3 Malaria can present with non-specific symptoms like headache, fatigue, joint pain, vomiting, abdominal discomfort and myalgia followed by fever, up to severe complications like jaundice, acute renal failure, anaemia, shock, convulsions and coma. In fact, in a malarious region where the endemicity of malaria is high, it can present with such varied and dramatic manifestations that malaria may have to be considered as a differential diagnosis for almost every clinical problem. A prompt and early diagnosis is important for effective management of malaria. Many acute febrile illnesses like viral fever, arboviral infections, enteric fever and leptospirosis occur in the tropics, and it is difficult to distinguish malaria from these illnesses on clinical grounds alone. 4 Hematological and biochemical changes associated with malarial infection are well recognized, but specific changes may vary with the level of malaria endemicity, haematological and nutritional status, demographic factors and malarial immunity. 5 Malaria can affect single or multiple organs with different levels of severity. This study is an attempt to investigate the effects of severe malaria in infected patients on a few hematological and biochemical parameters that could provide credible clues for understanding malaria pathogenesis, diagnosis and management.

Aims and objective

• To study the incidence of malaria admitted in MY Hospital during the study period.
• To study the various clinical (typical and atypical) manifestations of malaria.
• To study the biochemical and hematological changes in cases of malaria.
• To assess and evaluate the treatment outcomes.

Method of collection of data

Patients' informed consent was taken.
A detailed history, clinical examination and laboratory investigations, including peripheral smear examination for malaria parasite, malaria card test, hemoglobin, total WBC count, differential WBC count, platelet count, blood sugar, blood urea, serum creatinine, serum electrolytes, serum bilirubin (direct, indirect and total), SGOT and SGPT, urine routine microscopy, and USG abdomen, were performed. Patients with suspected co-infections like enteric fever, dengue fever, sepsis, UTI, meningitis, encephalitis etc. were investigated, and those who were found to have a specific cause were excluded from the study.

Inclusion criteria

• All patients above 12 years of age.
• Patients positive for malaria parasite by peripheral smear.

Exclusion criteria

• Patients less than 12 years of age.
• Patients negative for malaria parasite by peripheral smear.
• Patients having other co-infections like enteric fever, dengue fever, sepsis, UTI, meningitis, encephalitis etc.

Patients diagnosed with malaria were registered and monitored during the admission period. Changes in various (biochemical/hematological) parameters were noted, and the complications that developed were recorded. Associations were calculated.

RESULTS

In our study, 6.73% of patients were drowsy and 8.65% were unconscious at the time of presentation. Increased tone was noted in 4.8% of patients and decreased tone in 5.76% of patients. In our study, the most common complication observed in malaria patients was hyperpyrexia (61.53%), followed by severe anaemia (34.61%), and most of the patients having hyperpyrexia developed more than one complication. In our study, 95 (91.34%) patients were cured and 9 (8.65%) patients died. Most of the patients who died (77.7%) were infected by P. falciparum. The increase in the proportion of P. falciparum infection over P. vivax may be because of prevailing chloroquine resistance in P. falciparum. Widespread use of chloroquine might have suppressed P. vivax more than P. falciparum, thus increasing its incidence. One more cause of the lower incidence of P. vivax malaria could be low reporting to tertiary care centres, as P. vivax malaria causes relatively mild illness.

Age distribution

In this study, only patients aged more than 12 years were included. The maximum number of cases was seen between 12-30 years of age (58.65%). The incidence of malaria is highest in the younger, physically active age groups in this study. This has been attributed to: 1. A state of immunological balance against malaria, also known as "premunition", which is achieved late in adulthood. 2. Indian demography, which suggests that the maximum population right now is of younger adults. 3. An increased chance of contracting the infection due to more outdoor activities in the younger age group.

Seasonal variation

The maximum number of cases, i.e. 51%, was observed in the monsoon period, i.e. July to September, because the conditions were optimum for the development of the malaria parasite, and also because during this period more water accumulation occurs, which is suitable for the breeding of mosquitoes. This observation corroborates the findings of Kocher et al. 6

Altered sensorium

In our study, 13.46% of patients presented with altered sensorium, which is due to plugging of the smaller vessels of the brain producing local hypoxia; edema, hyperpyrexia, severe anemia, hepatic dysfunction, acute renal failure, metabolic disturbances etc. all contribute to it. Similar results were seen in the studies done by Kocher et al. and Chetan J. Galande et al. 6,10
Convulsions

In our study, 8 patients had convulsions, out of which only 3 patients had severe malaria, which is defined as more than 2 convulsions within 24 hrs as per WHO criteria. Our results corroborate the results obtained in the study conducted by Kocher et al. 6

Anaemia

In the present study, 64.42% of patients had anaemia. Severe anaemia, i.e. hemoglobin less than 7 gm/dl (as per WHO criteria), was observed in 34.61% of patients. Anemia in malaria is multifactorial in origin. These factors include hemolysis of parasitized as well as non-parasitized cells, splenic and reticular hyperactivity, genetic factors, oxidative stress and bone marrow suppression. 11,12 The nature of the hematological abnormalities depends on the time after infection. A recent study has revealed a role of interleukins (IL-4) and interferons (IFN-gamma) in erythropoietin suppression. 13,14 Inappropriately low reticulocytosis has been observed in malaria patients, suggesting that insufficient erythropoiesis is a major factor for anaemia. In our study, 57% of patients had normocytic normochromic anaemia and 25% of the patients had microcytic hypochromic anaemia. In our study, the mean hemoglobin level was 8.16 ± 2.66 gm/dl, 9.03 ± 2.86 gm/dl and 6.46 ± 2.97 gm/dl in patients suffering from P. falciparum, P. vivax and mixed infections, respectively.

Renal involvement

In our study, 13 patients had oliguria, while raised blood urea and serum creatinine were seen in 24.03% and 29.81% of patients, respectively. The maximum values of blood urea and serum creatinine were 160 mg/dl and 7.8 mg/dl, respectively. A similar result was found in the study conducted by Chetan J. Galande et al., in which renal involvement was seen in 31% of cases. 10

CT/MRI brain

Imaging of the brain was done in 18 patients; only 1 patient had bilateral cerebral edema, while the rest of the patients had normal brain imaging.

System involvement

In our study, the relative frequencies of involvement of various systems were: hematological involvement in 69% of cases, hepatic involvement in 42.3%, renal involvement in 29.03%, neurological involvement in 28.84%, cardiovascular involvement in 16.43% and pulmonary involvement in 2.88% of cases. Similar results were seen in the study conducted by Y. Khatib et al.: haematological system 72.3%, hepatic system 30%, renal system 21%. 15

Outcomes

In our study, out of the 104 patients, 9 (8.65%) patients died during the course of their stay and 95 (91.34%) patients were cured. Similar results were seen in the study of Kocher et al., in which 10.93% died, and in Chetan J. Galande et al., where the mortality was 3%. 6,10 In our study, the mortality rate was slightly higher compared to other studies. This may be explained by the fact that the majority of the deaths were seen in patients presenting late and with systemic involvement.

Complications related to mortality in expired cases

In our study, most of the deaths were due to multisystem involvement. In the patients who died of malaria, combined hematological and hepatic involvement was present in 100% of cases, neurological and renal involvement was seen in 77.7%, respiratory abnormality and hypoglycemia were found in 33.3%, and shock was found in 66.6% of cases. A similar pattern was observed by Kocher et al. 6

CONCLUSION

Malaria, though potentially treatable, still kills many patients every year in India. Infection with P. falciparum and P. vivax causes significant changes in hematological and biochemical parameters in patients.
The most common presentation of malaria is fever, so in endemic regions malaria may be considered a leading differential diagnosis in all patients presenting with acute febrile illness, especially patients who also have organomegaly, a fall in hemoglobin level, thrombocytopenia and altered liver function tests. Malaria may be associated with life-threatening complications such as cerebral malaria, severe anemia, acidosis, respiratory distress and acute renal failure (ARF). So it is vital to perform hematological and biochemical investigations to detect complications early and to treat them effectively. Inadvertent use of antimalarials in the past has led to increased resistance in Plasmodium, which is one of the reasons for treatment failure and also for the increasing incidence of P. falciparum. Due to the lack of vaccination, developing resistance to drugs and the changing presentation of the illness, malaria still remains a major health problem. In order to contain this illness, further ongoing studies are warranted.
2019-03-13T13:30:00.031Z
2016-01-01T00:00:00.000
{ "year": 2016, "sha1": "5aeb4dd926a5540b79f465d1a6921bca5f6d2d4d", "oa_license": null, "oa_url": "https://www.ijmedicine.com/index.php/ijam/article/download/166/152", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b5b690461b48e774309a8fb4dc8ab74a90c6d044", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52973586
pes2o/s2orc
v3-fos-license
Activation of oxytocin neurons in the paraventricular nucleus drives cardiac sympathetic nerve activation following myocardial infarction in rats Myocardial infarction (MI) initiates an increase in cardiac sympathetic nerve activity (SNA) that facilitates potentially fatal arrhythmias. The mechanism(s) underpinning sympathetic activation remain unclear. Some neuronal populations within the hypothalamic paraventricular nucleus (PVN) have been implicated in SNA. This study elucidated the role of the PVN in triggering cardiac SNA following MI (left anterior descending coronary artery ligation). By means of c-Fos, oxytocin, and vasopressin immunohistochemistry accompanied by retrograde tracing we showed that MI activates parvocellular oxytocin neurons projecting to the rostral ventral lateral medulla. Central inhibition of oxytocin receptors using atosiban (4.5 µg in 5 µl, i.c.v.), or retosiban (3 mg/kg, i.v.), prevented the MI-induced increase in SNA and reduced the incidence of ventricular arrhythmias and mortality. In conclusion, pre-autonomic oxytocin neurons can drive the increase in cardiac SNA following MI, and peripheral administration of an oxytocin receptor blocker could be a plausible therapeutic strategy to improve outcomes for MI patients. An acute myocardial infarction (MI) is associated with severe damage to the myocardium that impairs cardiac function. This damage is exacerbated by sustained overstimulation of the nerves that control heart function, specifically sympathetic nerve activity (SNA) 1. The initial increase in cardiac SNA within the first hours following MI is known to contribute, at least in part, to the generation of ventricular arrhythmias 1,2, which are ultimately responsible for sudden heart failure and death 3. Once established, this sympathetic hyper-excitation is essentially irreversible and facilitates permanent structural and functional damage of the heart 4. Despite the advent of coronary reperfusion therapy and other advances in the clinical treatment of myocardial infarction, early arrhythmias facilitated by an increased SNA still often prove fatal 5. Preventing sympathetic activation has therefore emerged as a promising target for the development of new therapeutic options because it targets the origin of the increase in SNA before it can contribute to sudden heart failure 6. To date, the pathological mechanisms underpinning sympathetic activation following MI remain to be fully elucidated, although peripheral neural reflexes and central integrative pathways have been implicated 7. Sympathetic traffic emanating primarily from the rostral ventral lateral medulla (rVLM) is shaped and refined by input from the nucleus tractus solitarius (NTS), which is the site where peripheral afferent signals converge. Although the rVLM and NTS have classically been considered the principal central nervous system (CNS) nuclei that modulate sympathetic outflow 8,9, activation of the hypothalamic paraventricular nucleus (PVN) has now emerged as a key regulator in the sustained elevation in SNA, at least in chronic heart failure 10-13. The PVN consists of a heterogeneous group of magnocellular and parvocellular neurons that are largely clustered into anatomically distinct divisions 14. Magnocellular neurons, which project to the posterior pituitary, serve an endocrine function through the secretion of oxytocin or vasopressin directly into the circulation 15.
Within the parvocellular division, a distinct subpopulation of pre-autonomic neurons (some of which also release oxytocin or vasopressin) project to the rVLM 16-18 and have been implicated in the modulation of SNA 17,19. To the best of our knowledge, no study has identified the neuronal pathways within the CNS that are activated to increase cardiac SNA immediately following MI. Yet, it is this early period, before the increase in SNA has become irreversibly established, where pharmacological intervention has the greatest opportunity of blocking or reversing sympathetic activation 20 to improve patient outcome. The main aim of this study was to identify whether pre-autonomic PVN neurons drive the early increase in cardiac SNA following acute MI. We used neuronal retrograde tracing and immunohistochemistry to show that acute MI selectively activates a sub-population of parvocellular pre-autonomic oxytocin neurons that project to the rVLM. These results are likely to be of clinical significance, since the administration of an oxytocin receptor blocker, whether it be centrally (intracerebroventricular (i.c.v.)) or even peripherally (intravenous (i.v.)), effectively prevents the centrally mediated increase in SNA following MI. MI increases c-Fos expression in the parvocellular PVN. Immunohistochemical staining of coronal brain sections for c-Fos, as a marker of neuronal activation, showed that acute MI triggered widespread neuronal activation throughout the PVN, with little c-Fos protein expression in other brain areas at the level of the PVN (Fig. 1). In particular, the number of c-Fos-positive neurons within the parvocellular PVN (pPVN) of MI rats was almost double that of sham rats (Fig. 1c). MI increases oxytocin, not vasopressin neuronal activation. Parvocellular PVN neurons predominantly express oxytocin or vasopressin 21,22. Therefore, we used double-label immunohistochemistry to determine which of these two phenotypes was primarily activated following acute MI. In this series of experiments, we again observed a higher number of c-Fos-positive neurons in the pPVN of MI rats compared to sham rats (Fig. 2a-c). Although there appeared to be a higher number of oxytocin-positive neurons in the pPVN of MI rats compared to sham rats, it was not significant (Fig. 2d). Moreover, while the number of oxytocin-positive neurons that co-expressed c-Fos protein was higher in the pPVN of MI rats compared to that of sham rats (Fig. 2e), the proportion of oxytocin-positive neurons co-expressing c-Fos was similar for MI rats and sham rats (Fig. 2f). In contrast to oxytocin-positive neurons, the number of pPVN vasopressin-positive neurons was not different between MI and sham rats (Fig. 3a-d). Similarly, there was no difference in the number (Fig. 3e) or the proportion (Fig. 3f) of vasopressin-positive neurons that co-expressed c-Fos. Activated parvocellular PVN neurons project to the rVLM. Having identified that acute MI selectively activates pPVN oxytocin neurons, we next determined whether these neurons project to the rVLM by injection of retrograde tracer into the rVLM (Supplementary Figure 1) and subsequently co-staining pPVN neurons for c-Fos and either oxytocin (Fig. 4a-h) or vasopressin (Fig. 5a-d). A key result from this study is that the number and proportion of retrogradely labeled pPVN oxytocin-positive neurons co-expressing c-Fos was higher in MI rats compared to sham rats (Fig. 4i-p).
In contrast to oxytocin neurons, there were virtually no pPVN vasopressin neurons retrogradely labeled from the rVLM (Fig. 5g), which was similar for both MI and sham rats (Fig. 5g, h). Hence, it appeared that acute MI selectively activated rVLM-projecting oxytocin neurons in the pPVN.

Oxytocin receptor blockade prevents sympathetic activation.
Considering acute MI activates a distinct sub-population of pPVN pre-autonomic oxytocin neurons, we tested whether central oxytocin receptor antagonism could prevent sympathetic activation following acute MI. Moreover, to determine whether oxytocin receptor blockade could be a clinically translatable option for the treatment of MI, we assessed the efficacy of peripherally administered retosiban, an oxytocin receptor blocker that can cross the blood-brain barrier, for preventing sympathetic activation.

Acute MI-induced arrhythmogenesis.
Cardiac arrhythmias were evident within the first minute of left anterior descending (LAD) coronary artery occlusion in all MI rats and persisted for at least 2 h post MI (Fig. 6a), although the incidence of arrhythmic episodes subsequently decreased with time (Fig. 6b). Importantly, the incidence of arrhythmias was lower in the MI rats that received an oxytocin receptor antagonist either centrally (atosiban, 4.5 µg in 5 µl, i.c.v.) or intravenously (retosiban, 3 mg/kg) (Fig. 6b).

Cardiac sympathetic nerve activity.
In sham rats treated with either atosiban (i.c.v.) or retosiban (i.v.), cardiac SNA remained stable for the 3 h recording period. In MI rats treated with saline, cardiac SNA progressively increased (190 ± 49%) over the 3 h following the infarct (Fig. 7a). In contrast, the administration of either atosiban or retosiban immediately after the MI completely prevented any subsequent MI-induced increase in cardiac SNA (Fig. 7a).

Arterial blood pressure and heart rate.
There was no significant change in arterial blood pressure (ABP) or estimated heart rate (eHR) over the 3 h post MI (or sham) in any of the groups (Supplementary Figure 2).

Discussion
Despite advances in the treatment of acute heart failure in recent decades, effective therapeutic strategies for averting or reversing sympathetic activation following acute MI have remained largely elusive because, at least in part, the pathological mechanisms responsible for increasing cardiac SNA following acute MI are yet to be fully elucidated. This study demonstrates that acute MI activates a distinct population of pPVN oxytocin neurons that project to the rVLM. Importantly, intravenous administration of the oxytocin receptor antagonist, retosiban, completely prevented the MI-induced increase in cardiac SNA, which likely contributed to the reduced incidence of ventricular arrhythmias and improved survival. Given that retosiban crosses the blood-brain barrier 23 , and that selective central blockade of oxytocin receptors (atosiban) prevents sympathetic activation, it appears likely that retosiban acts centrally to prevent oxytocin neuronal activation of the rVLM. Regardless of its site of action, retosiban has potential as a novel therapy for the acute treatment of MI. While the role of hypothalamic nuclei is well characterized for sustaining the increase in SNA in animal models of chronic heart failure [24][25][26] , one distinct finding of this study is that we have identified the pPVN as a crucial center for triggering the increase in cardiac SNA in the very early stages (within 90 min) following acute MI.
We quantified neuronal activation using c-Fos immunohistochemistry. One limitation, however, is that c-Fos protein is not fully expressed until ~90 min after the initial stimulus 27,28 , meaning we could not pinpoint the exact moment that the oxytocin neurons were activated. In spite of this limitation, one key advantage of using c-Fos immunohistochemistry is that, since c-Fos is expressed only within the cell nuclei, it can be used in combination with tract-tracing procedures to identify the phenotypic variation of c-Fos-labeled neurons 29 . Accordingly, c-Fos protein has been used extensively for over three decades as a reliable neuronal marker to identify key brain regions involved with autonomic regulation 10,27,30-33 . The parvocellular division of the PVN comprises neurons that project either to the median eminence 19 , where they serve a neuroendocrine role to control anterior pituitary function, or to other areas of the brain including the spinal cord and rVLM, where they play a pre-autonomic role 16 . Moreover, pPVN pre-autonomic neurons comprise multiple different neuronal phenotypes 21 , of which oxytocin neurons that project to the rVLM account for ~3% of the population 34 . In agreement with previous reports, we also identified that oxytocin neurons accounted for 1-2% of all retrogradely labeled parvocellular neurons, at least in sham animals. Remarkably, we noted that the proportion of neurons in which oxytocin protein could be detected using immunohistochemistry increased sixfold within 90 min of an MI. This ability to rapidly increase the number of oxytocin-expressing neurons is likely due to ~80% of the parvocellular pre-autonomic neurons already expressing the oxytocin gene 21 , although under basal conditions, transcription of the oxytocin gene in the majority of neurons is presumably below the threshold to translate sufficient oxytocin protein to be detected by immunohistochemistry. Thus, many of the previously undetectable neurons presumably crossed this detection threshold following the MI. Remarkably, the increase in the number of pPVN oxytocin neurons appeared to be selective for pre-autonomic neurons projecting to the rVLM, rather than a ubiquitous increase throughout the pPVN. These results perhaps reflect a physiological adaptation for accommodating the increased neuronal traffic to the rVLM following acute MI. Importantly, the mechanism by which an MI triggers the intracellular pathways associated with neuronal plasticity is an area of research that warrants further investigation. To date, emerging evidence implicates pPVN oxytocin neurons as potential modulators of SNA 35,36 . Indeed, the direct microinjection of oxytocin into the rVLM appears to facilitate a sympathetic-mediated increase in mean arterial pressure and heart rate 37 . Hence, it is reasonable to surmise that the ~30% increase in pPVN pre-autonomic oxytocin neuronal activation observed here is sufficient to elicit the 190% increase in cardiac SNA that followed acute MI. As mentioned above, there are some pPVN neurons that project to the spinal cord 14,17,38 , of which a sub-population is oxytocinergic 39 . Hence, we cannot rule out the possibility that some of these oxytocinergic projections to the spinal cord could be driving cardiac SNA post MI. Interestingly, studies have reported that, at least in chronic heart failure, there is a preferential increase in neuronal activity of pPVN neurons projecting to the rVLM, while pPVN neurons that project to the spinal cord remain largely unchanged 17 .
Whether this is also true in the very early stages following acute MI remains unknown. In contrast to sympathetic modulation, hypothalamic oxytocin neurons have been implicated in the modulation of parasympathetic nerve activity, at least in chronic heart failure, facilitating a functional and structural cardioprotective effect 40 . The role that these cardiac vagal neurons play in modulating cardiac function in the very early stages following MI remains an intriguing area of future research. Patients who experience an acute MI are commonly prescribed a range of therapeutic interventions such as β-adrenergic receptor blockers, fibrinolytics, anti-platelet drugs, angiotensin-converting enzyme inhibitors, and analgesics as part of standard clinical practice 41,42 . Yet, considering the negative impact that sympathetic activation has on patient outcome 43 , it is somewhat perplexing that an effective sympathoinhibitory therapy has not yet emerged as part of current clinical practice 42,44 . Unfortunately, even coronary reperfusion therapy is limited in reversing or averting the early increase in cardiac SNA 5 . Considering that pPVN pre-autonomic oxytocin neurons are activated following acute MI, we subsequently used direct electrophysiological recordings of cardiac SNA in vivo to assess whether oxytocin receptor blockade, using retosiban, was effective at preventing sympathetic activation following acute MI. The advantages of using retosiban, compared to other oxytocin receptor antagonists, are that: (i) as a non-peptide antagonist, retosiban can cross the blood-brain barrier so that an intravenous injection will access the brain 23 , which is clinically important for the immediate treatment of MI patients; (ii) retosiban is >18,000-fold more selective for oxytocin receptors than vasopressin (V1a) receptors 45 . Currently, retosiban has entered phase III clinical trials for treatment of preterm labor, after the successful outcome of a phase II pilot dose-ranging study 47 . Perhaps one of the most pertinent observations from this study is that retosiban prevented sympathetic activation following acute MI, reduced arrhythmic incidence, and improved survival. Although the mechanisms that trigger arrhythmias are largely dictated inherently by the damage incurred by the injured myocardium, such as acidosis, hypoxia, and Ca2+ and K+ ionic imbalances, an adverse increase in SNA is recognized as a key extrinsic driver of arrhythmogenesis following acute MI 5,48,49 , as evident in our study. Since retosiban was administered intravenously, it was unclear whether retosiban was acting peripherally or centrally to suppress sympathetic activation. However, the observation that selective central inhibition of oxytocin receptors (atosiban, i.c.v.) produced similar sympathoinhibitory effects to those seen when retosiban was administered peripherally supports the idea that retosiban centrally suppresses SNA. Further studies are now essential to confirm the exact site, within the CNS, at which the oxytocin receptor blocker is effective at suppressing SNA. Indeed, it is possible that retosiban's sympathoinhibitory effects are mediated through oxytocin signaling pathways other than the pPVN pathway proposed in this study. Regardless of the mechanism or site of action, retosiban's potent sympathoinhibitory effects, its ability to cross the blood-brain barrier, and its exceptional safety profile make it a strong candidate for a phase I clinical trial as an early treatment for MI patients.
In conclusion, we have used immunohistochemistry and retrograde labeling to identify a unique sub-population of pPVN pre-autonomic oxytocin neurons that project to the rVLM and that are selectively activated in the very early stages following an acute MI. In vivo electrophysiological recordings of cardiac SNA reveal the critical importance of this sub-population of oxytocin neurons for mediating the increase in cardiac SNA and, in doing so, identify oxytocin receptor blockade as a plausible early treatment for MI.

Methods
Animals.
Experiments were conducted on male Sprague Dawley rats (10 weeks old; body weight ~250-350 g). All rats were on a 12 h light/dark cycle and provided with food and water ad libitum. All experiments were approved by the Animal Ethics Committee of the University of Otago, New Zealand, and conducted in accordance with the New Zealand Animal Welfare Act, 1999, and associated guidelines. The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

(Figure caption: Effect of atosiban (i.c.v.) and retosiban (i.v.) on arrhythmia and mortality following acute MI. a A typical 'Chart' recording showing an example of arrhythmic episodes within the first minute following LAD coronary occlusion (i.e., myocardial infarction (MI)), and the subsequent increase in cardiac SNA at 180 min post MI. The inset (red box) shows a close-up view of several impulse profiles. b The incidence of arrhythmic episodes (mean ± SEM) each hour for three consecutive hours following MI in untreated MI rats (Untreated; n = 6), or rats treated with atosiban (MI + atosiban, 4.5 µg in 5 µl, i.c.v., n = 6) or retosiban (MI + retosiban, 3 mg/kg, i.v., n = 8). Both atosiban and retosiban significantly reduced the incidence of arrhythmic episodes following acute MI, compared to saline-treated MI rats (*P < 0.05, **P < 0.01, ***P < 0.001, unpaired t-test). c Kaplan-Meier survival analysis showing a greater mortality rate in MI + saline rats (n = 12) compared to MI + retosiban rats (n = 9; P = 0.043) and MI + atosiban rats (n = 7; NS) within 3 h of the MI. None of the sham rats (n = 6) died during the experiment. SNA sympathetic nerve activity.)

[…] for 90 min at room temperature. Sections were incubated in Vectastain Elite ABC Solution (Vector Laboratories, Burlingame, USA) for 90 min at room temperature, followed by incubation in diaminobenzidine (DAB) substrate kit solution with nickel enhancement (Vector Lab, Burlingame, USA) to visualize c-Fos. Sections were examined at regular intervals under a bright-field microscope. When a visible black precipitate was observed against the background, the sections were washed in TBS to stop the reaction. Double-label immunohistochemistry for c-Fos protein with either oxytocin or vasopressin was also performed using chromogen labeling with DAB without nickel enhancement. The c-Fos protein was first labeled as described earlier. Subsequently, the procedure was repeated to label either the oxytocin or vasopressin antigen although, instead of using a biotinylated secondary antibody for the oxytocin labeling, a peroxidase-labeled secondary antibody was used. To specifically label oxytocin protein, mouse monoclonal anti-oxytocin antibody (MAB5296, Millipore, 1:20,000) and horseradish peroxidase horse anti-mouse IgG antibody (Vector Laboratories, PI-2000; 1:200) were used. To label vasopressin protein, guinea pig monoclonal anti-arginine vasopressin antibody (Peninsula Laboratories, San Carlos, CA; 1:25,000) and biotinylated goat anti-guinea pig secondary antibody (Vector, BA7000; 1:200) were used.
Negative controls were run with omission of primary antibodies and showed no non-specific staining. Immunostained sections were mounted onto gelatin-coated slides and left to dry at room temperature. Slides were cover-slipped with a mounting medium containing distyrene, plasticiser, and xylene (DPX) (Sigma-Aldrich, USA) and left overnight to dry in a fume hood.

Retrograde tracing from the rVLM.
To characterize neuronal projections from the parvocellular PVN to the rVLM, fluorescent immunohistochemistry for oxytocin or vasopressin was performed in brain sections from rats that were injected with retrograde tracer into the rVLM. Standard aseptic surgical procedures were used for the injection of green fluorescent microsphere beads (Lumafluor Inc., Durham, USA), which emit fluorescence at 460 nm 51 , directly into the rVLM 1 week prior to MI induction. Rats were anesthetized with 1-5% isoflurane (1 l min −1 of O 2 ) and placed in a small animal stereotaxic frame. The tip of a flame-pulled microinjection needle was positioned in the rVLM based on the coordinates of Paxinos and Watson (12 mm posterior to the bregma, 2.1 mm lateral to the midline, and 10 mm ventral to the skull surface) 50 , and 100 nl of green fluorescent microspheres was injected over 5 min using a Nanoject II micromanipulator device (Drummond Scientific Company, cat. no. 3-000-205A, Philadelphia, USA). Only the brains from those rats that showed evidence of correct tracer positioning completely within the rVLM (Supplementary Figure 1) were used for subsequent immunohistological staining for c-Fos and oxytocin. Following recovery and post-operative care, rats were returned to their standard housing conditions where they remained and were monitored for 1 week. On day 7 post surgery, rats were subjected to the MI or sham protocol and their brains removed and sectioned, as described above. The protocol for fluorescent immunohistochemistry was similar to that described for DAB immunohistochemistry with the exceptions that (1) endogenous aldehyde activity was blocked by incubating the tissue sections in sodium borohydride (NaBH 4 , 0.1%) for 20 min at the beginning of the immunohistochemistry protocol, and (2) the secondary antibodies used to label oxytocin and vasopressin were fluorescent-tagged: Alexa Fluor 568 goat anti-mouse (A 11031, Molecular Probes, Oregon, USA; 1:500) and Alexa Fluor 568 goat anti-guinea pig (Life Technologies, USA; 1:500), respectively. The sections were then stained for c-Fos protein using chromogen labeling with DAB as previously described. Following immunostaining, tissues were mounted, dried, and cover-slipped with Vectashield mounting medium (Vector Laboratories Inc., Burlingame, CA, USA).

Immunohistochemistry data analysis
DAB immunohistochemistry data analysis.
The c-Fos-positive, oxytocin-positive, and vasopressin-positive cell bodies were counted manually using an Olympus AX51 bright-field microscope, with the experimenter blinded to the experimental groups. The cell bodies were counted on both sides of the pPVN in three sections and the mean number of labeled cells was calculated for each rat. Where double labeling was completed, the number of neurons with oxytocin or vasopressin that colocalized with c-Fos protein was also counted.

Fluorescent immunohistochemistry data analysis.
Oxytocin-positive and vasopressin-positive cell bodies were counted using an Olympus AX51 epifluorescent microscope with FITC (for the retrograde label) and Texas Red (for oxytocin and vasopressin) filters, with the experimenter blinded to the experimental groups. Retrograde tracer injection into the rVLM predominantly labels the ipsilateral side of the PVN 16,18 , and so quantification was carried out only on the side of the pPVN ipsilateral to the retrograde tracer injection site. The numbers of retrogradely labeled, oxytocin-positive, and vasopressin-positive cells were counted, as well as the number of cells that co-expressed the retrograde label and either oxytocin or vasopressin.

Electrophysiological recording of cardiac sympathetic nerve activity.
In urethane-anesthetized rats, a left thoracotomy was performed between the first and second ribs to expose and isolate the stellate ganglion. The cardiac sympathetic nerve was identified as a branch from the stellate ganglion, dissected free of surrounding connective tissue, sectioned, and the proximal section (containing efferent fibers) was placed on a pair of platinum recording electrodes 5,49 . The signal was filtered (low-cutoff 0.1 kHz; high-cutoff 1 kHz) and amplified 5 (BMA-200, AC/DC Bioamplifier, USA) and subsequently passed through an amplitude discriminator (model WD-2, Dagan Corp., MN, USA) for counting nerve discharge frequency (impulse frequency). The femoral artery was cannulated for the continuous measurement of ABP. The interval between the arterial systolic peaks was used as an estimate of heart rate (eHR). One group of rats received an i.c.v. injection of atosiban. Using a stereotaxic frame, the tip of a 27-gauge stainless steel cannula was positioned in the right lateral cerebral ventricle based on the coordinates of Paxinos and Watson (0.8 mm posterior to the bregma, 1.5 mm lateral to the midline, and 5.0 mm ventral to the skull surface). The distal end of the cannula was connected to a 10 µl Hamilton syringe for subsequent drug administration. Correct positioning of the i.c.v. catheter was confirmed after each experiment by staining with Evans blue dye (5 µl) 49 .

(Figure caption fragment: There was a significant main effect of TIME (F (6, 90) = 10.96, P < 0.0001, two-way RM ANOVA), TREATMENT (F (2, 15) = 12.37, P = 0.0007, two-way RM ANOVA), and a significant TIME × TREATMENT interaction (F (12, 90) = 5.69, P < 0.0001, two-way RM ANOVA). *P < 0.05, ***P < 0.001 vs pre-MI (time zero); #P < 0.05, ###P < 0.001 vs MI + retosiban and vs MI + atosiban, Bonferroni's post hoc test. All data are presented as mean ± SEM. b Representative transverse section of a heart slice stained with tetrazolium chloride (TTC) as a quantitative means of assessing infarct size. The viable myocardium absorbs the TTC stain and forms a reddish pink pigment. In contrast, the infarcted myocardium remains unstained pale white (encircled by blue dashed line).)

Cardiac SNA, ABP, and eHR were continuously recorded for 20 min prior to occlusion of the LAD coronary artery and for three consecutive hours after LAD occlusion (MI), with an injection given within 5 min of the infarct of either (a) saline i.v. (untreated MI, n = 6), (b) atosiban i.c.v. (4.5 µg in 5 µl; Sigma CAS Number 90779-69-4; MI + atosiban; n = 6), or (c) retosiban i.v. (3 mg kg −1 ; GSK 221-149 A, USA; MI + retosiban; n = 8). Sham animals did not undergo LAD occlusion, but did receive (a) an i.c.v. injection of atosiban (Sham + atosiban, n = 6) or (b) an i.v.
injection of retosiban (Sham + retosiban, n = 6).

Electrophysiology data analysis.
Raw cardiac SNA and ABP were continuously sampled at 4 kHz and 400 Hz, respectively, using a PowerLab data-acquisition system (model 8/S, AD Instruments Ltd, New Zealand). The raw nerve signal was rectified and integrated (0.5 s resetting interval) online, and the integrated nerve signal was displayed in real time. The scale of variability in SNA within groups was reduced by omitting the highly variable background 'noise' levels (i.e., zero nerve activity) from the recorded electroneurogram 5 . At the end of each experiment, the electroneurogram was continuously recorded as the rat was killed by an intra-cardiac injection of 1 M KCl, which elicited a maximum increase in cardiac SNA within the first 10 s of the heart stopping 5 . Subsequently, after the animal died, only background noise contributed to the overall recorded electroneurogram. During post-experiment analysis, this 'noise' was subtracted from the pre-recorded SNA. Moreover, we were able to verify that the maximal increase in SNA in response to KCl (>250 µV s −1 ) was markedly greater than that in response to MI (~50-80 µV s −1 increase) or to differences between groups of rats; thus, a 'ceiling' effect was unlikely to confound the SNA results of this study. To further avoid potential variability between groups, SNA data were normalized by assessing the magnitude of response (% increase) to each manipulation 5 .

Measurement of infarct size.
At the completion of each electrophysiology experiment, the rat was killed and the heart excised and sectioned into 2 mm horizontal slices down the vertical plane. The sections were then stained with 2,3,5-triphenyltetrazolium chloride (TTC) (Sigma-Aldrich, Inc., MO, USA) and subsequently fixed in 10% formalin for 20 min. Slices were mounted and photographed. Total infarct size was determined by measuring the area of the infarction for each slice, multiplying the area by the slice thickness, and summing across all slices. Infarct size was presented as a percentage of the total left ventricular wall 49 .

Statistical analysis.
Statistical analyses were performed using Prism (v6.0; GraphPad Software Inc.). All results are presented as mean ± SEM. Immunohistochemical experiments were analyzed using unpaired t-tests. In electrophysiology experiments, two-way analysis of variance (ANOVA; repeated measures) was used to test significance for (i) temporal changes in recorded variables following LAD occlusion and (ii) differences between MI + saline and MI + retosiban (i.v.) or MI + atosiban (i.c.v.) groups; where the F-ratio was significant, Bonferroni's post hoc tests were completed. Kaplan-Meier survival analysis was performed to compare survival curves between the different groups of MI rats. A P value ≤ 0.05 was predetermined as the level of significance for all statistical analyses.

Data availability
The data that support the findings of this study are available within the article and supplementary files, or available from the corresponding authors upon request.
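For concreteness, the integration, noise-subtraction, and normalization steps described above, together with the slice-based infarct-size calculation, can be sketched in Python as follows. This is a minimal illustration under stated assumptions (the function and variable names, array layouts, and defaults are ours, not the authors'), not the actual analysis pipeline used in the study:

```python
import numpy as np

FS_SNA = 4000  # Hz, sampling rate of the raw cardiac SNA signal (as stated above)

def integrate_sna(raw_sna, fs=FS_SNA, window_s=0.5):
    """Rectify the raw electroneurogram and integrate it over
    non-overlapping 0.5 s resetting windows; returns one value
    (in uV*s) per window."""
    win = int(fs * window_s)
    rectified = np.abs(np.asarray(raw_sna, dtype=float))
    n_win = len(rectified) // win
    trimmed = rectified[: n_win * win]
    return trimmed.reshape(n_win, win).sum(axis=1) / fs

def normalize_sna(integrated, noise_level, baseline_value):
    """Subtract the post-mortem background 'noise' level and express
    SNA as a percentage change from the pre-occlusion baseline."""
    corrected = np.asarray(integrated, dtype=float) - noise_level
    baseline = baseline_value - noise_level
    return 100.0 * (corrected - baseline) / baseline

def infarct_percent(infarct_areas_mm2, lv_areas_mm2, thickness_mm=2.0):
    """Infarct volume (slice area x slice thickness, summed over slices)
    expressed as a percentage of total left-ventricular volume."""
    infarct_vol = np.sum(np.asarray(infarct_areas_mm2)) * thickness_mm
    lv_vol = np.sum(np.asarray(lv_areas_mm2)) * thickness_mm
    return 100.0 * infarct_vol / lv_vol
```

In this sketch, `noise_level` would be taken from the electroneurogram recorded after the KCl injection (when only background noise remains), and `baseline_value` from the mean integrated SNA over the 20 min pre-occlusion period.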
2018-11-01T14:27:16.617Z
2018-10-04T00:00:00.000
{ "year": 2018, "sha1": "3d8286334701105eaa93ff6909b604a1895ed978", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s42003-018-0169-5.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3d8286334701105eaa93ff6909b604a1895ed978", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
7786255
pes2o/s2orc
v3-fos-license
Dull to Social Acceptance Rather than Sensitivity to Social Ostracism in Interpersonal Interaction for Depression: Behavioral and Electrophysiological Evidence from Cyberball Tasks

Objectives: Impairments in interpersonal relationships in depression present as irritability, pessimism, and withdrawal, and play an important role in the onset and maintenance of the disorder. However, we know little about the neurological causes of this impaired interpersonal function. This study used the event-related brain potential (ERP) version of the Cyberball paradigm to investigate the emotions and neural activities of depressive patients during social inclusion and exclusion simultaneously, to explore the underlying neuropsychological mechanisms. Methods: Electrophysiological data were recorded while 27 depressed patients and 23 healthy controls (HCs) performed a virtual ball-tossing game (Cyberball) during which the participants believed they were playing with two other co-players over the internet. The Cyberball paradigm included two conditions: inclusion, during which participants received the ball with the same probability as the other players to experience a feeling of acceptance, and exclusion, during which the participants experienced a feeling of ostracism when the other two players threw the ball only to each other. The Positive and Negative Affect Schedule (PANAS) was administered at baseline and after each block of the Cyberball to assess positive and negative affect. In addition, a brief Need-Threat Scale (NTS) was used to assess the fulfillment of basic needs of subjects after each block and 10 min after ostracism. Moreover, the relationship between the patients' ERP data and clinical symptoms was analyzed. Results: Exclusion, compared to inclusion, caused a decrease in positive affect and an increase in negative affect. Group differences were only found in positive affect. Moreover, patients reported a lower level of basic needs than did HCs after social inclusion, but a similar level of basic needs after social exclusion. At the electrophysiological level, patients showed decreased P3 amplitudes compared to HCs in social inclusion, and P3 amplitudes were borderline negatively correlated with their scores on anhedonia symptoms. Limitations: A limitation of our study was the heterogeneity of the patient sample. Conclusions: The behavioral and electrophysiological results indicated that the interpersonal problems of depressive patients were mainly due to deficits in processing pleasurable social stimuli rather than aversive social cues.
INTRODUCTION
Depression is a very common disease with high morbidity, disability, and recurrence rates (Luppa et al., 2007). It was the second biggest contributor to the disease burden in China between 1990 and 2010 (Yang et al., 2013). Part of this burden relates to the impairment in quality of life and relationships for patients with depressive disorder (Mehta et al., 2014). The interpersonal relationships of depressed patients are irritable, pessimistic, and withdrawn, and these features may persist or may recover more slowly than symptom changes (Southwick et al., 2005). These interpersonal problems, which may be due to negative cognitive bias, anhedonia, and emotion regulation deficits, have important roles in the onset and maintenance of the depressive state. In addition, many effective short-term treatment strategies target the improvement of interpersonal problems, such as interpersonal psychotherapy (IPT) (Bruijniks et al., 2015). However, the neuropsychological mechanisms of this impaired interpersonal function in depression are poorly understood. In daily life, ostracism, social exclusion, and rejection are common aversive phenomena during interpersonal interactions. Recently, studies have shown that social exclusion experiences are related to high accident rates, suicide, homicide, and an increasing prevalence of affective disorder and personality disorder (Leary et al., 2003; Williams, 2007; Munjiza et al., 2014). Depressive patients may be sensitive to social exclusion. Jobst et al. found that social exclusion experiences elicited pronounced negative emotions and lower oxytocin levels in patients with chronic depression compared to healthy control subjects, suggesting that depression causes difficulty in coping adequately with aversive social cues (Jobst et al., 2015). Chronic exposure to neglect and rejection shows a strong association with the onset of depression (Slavich et al., 2010; Mandelli et al., 2015). Masten et al. conducted a neuroimaging study involving 20 13-year-old adolescents who were included and excluded by peers during the Cyberball paradigm, and depressive symptoms were assessed via parental reports at the time of the scan and 1 year later.
The results showed that exclusion invoked greater subgenual anterior cingulate cortex (subACC) activity than inclusion, and that this activity was associated with increases in parent-reported depressive symptoms 1 year later (Masten et al., 2011). Thus, depression is closely interrelated with social exclusion. In addition, it is noteworthy that depressive disorder has also been found to be associated with impairments in generating positive emotions or motivations, which is named anhedonia (Watson and Clark, 1995; Sloan et al., 2001; Dichter et al., 2004; Joormann and Gotlib, 2006; Watson and Naragon-Gainey, 2010). Researchers have conducted numerous studies using different paradigms to explore anhedonia in depression and proposed that anhedonia in depression involves the impairment of motivation, reinforcement learning, and reward-based decision making, rather than the experience of pleasure per se (Pizzagalli, 2014). Studies showed that patients with depression had a response bias against happy expressions (Surguladze et al., 2004), and reported blunted affective responses to positive but not negative cues (Sloan et al., 2001; Rottenberg et al., 2002; Dichter et al., 2004). Moreover, studies suggested that reduced positive self-image, but not increased negative self-image, predicted depressive symptoms 9 months later (Dobson and Shaw, 1987; Johnson et al., 2007). Using monetary reward tasks, Knutson et al. reported that patients showed significantly reduced reward responsiveness (Knutson et al., 2008). Another study reported that outpatient individuals with high depressive symptoms showed a lower response bias toward highly rewarded stimuli, which predicted the severity of their future depression (Pizzagalli et al., 2005). Therefore, anhedonia is recognized as a key trait related to vulnerability to depression. Research on anhedonia represents a shift of focus from the aspects of depression related to negative affect to the aspects related to positive affect (Forbes, 2009). Thus, relative to social exclusion, social inclusion, which means the perception of positive involvement in interpersonal interactions, is of even greater concern (Parr et al., 2004). However, previous research on anhedonia in depression mostly used pleasant images, words, or money as positive stimuli (Wang et al., 2006; Bylsma et al., 2008; Knutson et al., 2008; Foti and Hajcak, 2009). Few studies have addressed the processing of positive social stimuli, which can be regarded as social rewards, a critically important type of reward in depression (Forbes, 2009). Therefore, studies on dysfunctional neural responses to social inclusion in depression are of great significance. The aim of the present study was to focus simultaneously on social ostracism and acceptance, using the Cyberball paradigm to address the behavior and the neural activity during interpersonal interactions in patients with depressive disorders. The Cyberball paradigm is a computer-based virtual ball-tossing game (Williams et al., 2000). It is widely used for reliably inducing feelings of interpersonal ostracism and acceptance in the laboratory environment (Eisenberger et al., 2003; Sebastian et al., 2010; Bolling et al., 2011; Maurage et al., 2012; Mooren and van Minnen, 2014). In the ball-tossing game, there are two social situations: one is mental acceptance, and the other is mental ostracism. In mental acceptance, participants are included in the game and the rate of receiving the ball is the same as that of the other players.
However, in mental ostracism, the player is excluded from the game and has no chance to receive the ball from the other players. To measure the immediate effects of the game, participants are asked to report how they felt regarding mood and four basic needs: belonging, self-esteem, control, and meaningful existence (Jamieson et al., 2010). Furthermore, social exclusion induces an automatic emotion regulation process (DeWall et al., 2011). The four primary needs damaged during ostracism were recovered after a 45 min delay. This delayed recovery from the effects of social exclusion reflects self-regulatory ability. Participants with high social anxiety reported a prolonged recovery compared to those with low social anxiety (Zadro et al., 2006). Patients with depressive disorders are dysfunctional in the automatic regulation of emotion (Kupfer et al., 2012). Therefore, we suggest that the reflexive and painful response to ostracism in patients with depression may be prolonged. Event-related brain potentials (ERPs), known for their optimal temporal resolution on the millisecond scale, can provide information regarding the discriminative ability of the brain and the neurocognitive processing related to shifting attention (Singh and Telles, 2015). ERPs can monitor the neural processes engaged in disrupted cognitive function and can identify specific neurocognitive deficiencies in psychiatric patients (Campanella, 2013; Delle-Vigne et al., 2014). Therefore, they may help psychiatrists to better understand the pathophysiological mechanisms involved in diverse mental diseases and then develop a follow-up rehabilitation plan specific to each patient's deficit. Moreover, ERPs can also assess the possible benefits of training impaired cognitive functions in comparison with medication alone (Campanella and Maurage, 2016). Therefore, ERPs can be useful for clinicians in establishing the best-suited individualized treatment. Thus, electrophysiological data were recorded while participants performed the Cyberball game to investigate the dynamic and ongoing neural processes associated with social interactions (Themanson et al., 2013, 2015). Prior reports found that the N2, P3b, and frontal slow wave in response to each exclusionary cue were closely related to the detection, appraisal, and regulation processes of social exclusion, respectively (Crowley et al., 2009; Themanson et al., 2013). Moreover, the P3b amplitude for the exclusionary cue, but not the N2 component, was significantly correlated with self-reported affect. Themanson therefore concluded that the perception of being excluded may be more closely related to self-reported feelings in response to social exclusion (Themanson et al., 2013). Niedeggen reported that the P3 in response to an inclusionary cue was related to the subjective expectancy of social involvement (Niedeggen et al., 2014). The current study therefore observed the P3 component to investigate the neural basis of social involvement in depression. Based on the above, we propose that: (i) patients with depression are hypersensitive to social exclusion at the behavioral and neural levels; (ii) patients with depression show a blunted response to social acceptance signals; and (iii) the decreased need fulfillment induced by social exclusion lasts longer in depressive patients than in healthy control subjects.

Participants
Thirty-one outpatients with depression and 25 healthy control (HC) participants were included in the study.
Four patients and two controls were excluded because of perspiration artifacts in the ERPs or because they quit the study prematurely. Thus, the final sample consisted of 27 depressive patients (18 female) and 23 HCs (17 female). The two groups were matched for age and years of education (see Table 1 for demographic characteristics). The patients were recruited at the Mental Health Center of Anhui Province of China and were diagnosed, by two senior psychiatrists, with depressive episodes without psychotic symptoms according to the DSM-IV-TR. General exclusion criteria for participants included fewer than 6 years of education, younger than 18 years, older than 45 years, a history of organic brain disease or neurological disorders (e.g., dementia, epilepsy, or history of brain injury), or current or past substance abuse or dependence. Patients were also excluded if they suffered from any psychiatric disorder other than depression. The HCs were recruited via a website, and were excluded if they had a current or past psychiatric disorder or had ever received psychiatric treatment. Within the depressive patient group, seven were medicated with a selective serotonin reuptake inhibitor (SSRI), four with a selective serotonin and noradrenaline reuptake inhibitor (SSNRI), one with a tricyclic antidepressant (TCA), and 12 patients were drug free. The study was approved by the local ethics board. All participants provided written informed consent. The healthy controls received 100 RMB as compensation for the experiment. Every participant completed a simple demographics questionnaire covering general information, psychiatric treatment history, and substance abuse or dependence history. The severity of current depressive symptoms was assessed with the Beck Depression Inventory 13 (BDI-13), which has satisfactory reliability and validity (Beck et al., 1961). The self-report measure consisted of 13 items based on the experience of the past 2 weeks, including the current day. The fourth item (life satisfaction), the eighth item (socializing), and the thirteenth item (appetite) were considered anhedonia symptoms.

Cyberball Manipulation
Participants were told that they would be playing an online game of "catch the ball" with two other players connected over the internet, and that it did not matter who threw or caught, but rather that they should use the animated ball-toss game to assist them in visualizing the other players, the setting, and the temperature. Unknown to the participants, the two other players in the Cyberball game were computer-generated players controlled by a computer program. During the Cyberball game, the participant's neuroelectrical activity was recorded. The settings of our Cyberball paradigm, including the number of throws, the time course, and the event-related markers, followed Themanson et al.'s ERP version of Cyberball (Themanson et al., 2013), except that only two blocks (inclusion and exclusion) were administered (Figure 1). In detail, each block concluded after 80 trials, and the participant had a 50% chance of receiving the ball at each throw in the inclusion block, resulting in each participant getting ∼33% of the throws. When the subject received the ball, he/she could press the F key to throw the ball to the player on the left and the J key to throw to the player on the right. Every trial lasted 2.5 s, including a 1.5 s period of ball movement and 0.5 s before and after throwing the ball.
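As a concrete illustration of this scheduling logic (80 throws per block, a 50% chance of receiving each computer throw, which yields ~33% of throws overall), a minimal Python sketch follows. The names and the random key choice are ours for simulation purposes; this is not the original task code:

```python
import random

N_TRIALS = 80  # throws per block

def inclusion_block():
    """Simulate ball possession for one inclusion block: whenever a
    computer player holds the ball, the participant has a 50% chance of
    receiving it; otherwise it goes to the other computer player. In the
    steady state this gives the participant about one third of throws."""
    holder = random.choice(["left", "right"])  # a computer player starts
    received = 0
    for _ in range(N_TRIALS):
        if holder == "participant":
            # Participant presses 'F' (left) or 'J' (right); the choice
            # is randomized here purely for simulation purposes.
            holder = random.choice(["left", "right"])
        else:
            other = "left" if holder == "right" else "right"
            holder = "participant" if random.random() < 0.5 else other
            if holder == "participant":
                received += 1
    return received

# Sanity check: the participant should receive roughly a third of throws.
print(sum(inclusion_block() for _ in range(1000)) / (1000 * N_TRIALS))
```

The ~33% figure falls out of the possession chain: the participant always throws back to a computer player, so two thirds of throws are made by computer players, half of which reach the participant.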
For the two computerized players, random intervals between 0.5 and 3 s were set to create the impression that they were making a choice about whom to throw to. In the exclusion block, participants could not catch the ball after receiving 10 throws, resulting in almost 50 exclusionary throws. Event markers were inserted at the time when the computerized players decided to throw the ball; inclusion events were those throws from a computerized player to the participant during the inclusion block, and exclusion events were those between the two computerized players during the exclusion block.

Positive and Negative Affect Scales
The Chinese version of the Positive and Negative Affect Schedule (PANAS) was also used. It was administered before the Cyberball task to assess baseline status, and after the social acceptance and exclusion tasks during the experiment.

Assessment of Basic Needs
The brief Need-Threat Scale (NTS) used in the Cyberball paradigm has been previously reported (Williams et al., 2000; Zadro et al., 2004), and was translated from English to Chinese and back-translated from Chinese to English until agreement was reached, for both the instructions and the questionnaires. This back-translation methodology was introduced by Brislin and is a useful method for translating international questionnaires (Brislin, 1970). Participants were asked to complete the NTS after each Cyberball block according to how they felt while playing the game, whereas the NTS completed 10 min after the exclusion Cyberball block described how they felt "right now."

Manipulation Checks
Manipulation checks were conducted to confirm the participants' exclusionary perception. Participants were asked to estimate the percentage of throws they had received and to rate how much they felt excluded while playing the Cyberball game on a five-point Likert scale ranging from 1 (very included) to 5 (very excluded).

Event-Related Potential Recording
Electroencephalography (EEG) was recorded from 64 scalp sites using Ag-AgCl electrodes mounted on an elastic cap (Neuro Scan, Sterling, VA, USA) according to the international 10/20 system, with the left mastoid as the online reference (re-referenced offline to the average of the left and right mastoids) and a forehead ground. Vertical and horizontal bipolar electrooculography (EOG) activity was recorded to monitor eye movements. EEG and EOG activity were continuously digitized (500 Hz sampling rate) and low-pass filtered (30 Hz; 24 dB/octave). All electrode impedances were maintained below 10 kΩ. Offline processing of the stimulus-locked ERP included ocular artifact removal using a regression procedure implemented in the Neuroscan software (Semlitsch et al., 1986), extraction of 1200 ms stimulus-locked epochs (with a 200 ms pre-stimulus baseline), and artifact rejection (epochs with signals that exceeded ±100 µV were excluded from averaging).

Statistical Analysis
Behavioral measures were statistically evaluated using SPSS, version 20 (IBM, Armonk, NY, USA). Analyses included repeated-measures analyses of variance (ANOVAs), two-tailed independent-samples t-tests with Bonferroni correction, and Pearson's correlation analyses. An experiment-wise alpha level of P < 0.05 was set for all analyses. After visual inspection of the grand average waveforms (Figure 4), we detected a broad positive wave, which peaked around 420 ms after stimulus onset and clearly differentiated between the two groups, stimulus categories, and electrodes.
Therefore, we computed the average amplitude in the discrete latency window running from 370 to 470 ms after the event marker at electrodes FZ, FCZ, and CZ.

Behavioral Measures
Statistical analyses of the PANAS scores showed the expected block effects on both positive affect (PA) and negative affect (Figure 2). These findings suggest that social exclusion resulted in a significant decrease in positive affect and an increase in negative affect for both patients and HCs. The positive mood of patients was lower than that of the HCs in the measurements following both social inclusion and exclusion; the negative mood of the patients showed no significant difference from the HCs. The repeated-measures 3 (block: inclusion, exclusion, and 10 min after exclusion) × 2 (group: patients and HCs) ANOVA on the NTS scales also revealed the expected block effects for all four fundamental human needs (belonging, self-esteem, control, and meaningful existence): F (2, 48) = 111.659, P < 0.001; F (2, 48) = 70.214, P < 0.001; F (2, 48) = 57.978, P < 0.001; F (2, 48) = 61.034, P < 0.001. The block × group interaction effects for self-esteem and control needs were also significant [F (2, 48) = 3.407, P < 0.05; F (2, 48) = 5.446, P < 0.05]. The group main effects for the four basic needs were not significant (P-values > 0.1). Follow-up analyses showed that patients were more threatened on self-esteem and control needs during social inclusion [t (49) = −2.193, P < 0.05; t (49) = −1.916, P = 0.061], but were not different from HCs during social exclusion. Further simple analyses showed a significant decrease of scores on self-esteem and control needs between the inclusion and exclusion blocks and an increase between the exclusion block and 10 min after exclusion for HCs (P-values < 0.01). For patients, there were no significant differences for self-esteem and control needs between the exclusion block and 10 min after exclusion (P-values > 0.1) (Figure 3). The findings suggest that social exclusion induced a significant decrease in the fulfillment of all needs for both groups. After 10 min, these effects were restored in HCs, but not in patients with depression, especially on the self-esteem and control need subscales.

Correlation Analysis
The average amplitudes of P3 evoked by inclusion were negatively but not significantly correlated with BDI scores (r = −0.234, P > 0.05), but were borderline significantly and negatively correlated with the scores on the anhedonia items (r = −0.331, P = 0.092), which suggested that with higher scores on the anhedonia items, P3 amplitudes would be lower.

DISCUSSION
The main finding of this study was that depressive patients and HCs differed in both subjective reports and electrophysiological activity when encoding pleasurable social stimuli, rather than aversive social cues. The exclusion Cyberball, compared to the inclusion Cyberball, caused a decrease in positive affect, an increase in negative affect, and decreased basic need satisfaction for both groups. When examining the group effect, the scores on the negative subscales of the PANAS did not differ between the two groups. However, the scores on the positive subscales were lower after playing both the inclusion and exclusion Cyberball in patients relative to HCs. In addition, the self-esteem and control needs were lower in patients than in HCs after social acceptance, but not after social ostracism.
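Before turning to the electrophysiological findings, the P3 quantification described in the Methods (200 ms pre-stimulus baseline, ±100 µV artifact rejection, and the mean amplitude in the 370-470 ms window) can be sketched in Python as follows. The array layout and names are assumptions of ours for illustration, not the authors' processing code:

```python
import numpy as np

FS = 500                  # Hz, EEG sampling rate
BASELINE_MS = 200         # pre-stimulus baseline duration
WINDOW_MS = (370, 470)    # P3 latency window after stimulus onset

def p3_mean_amplitude(epochs, reject_uv=100.0):
    """Compute the mean P3 amplitude from stimulus-locked epochs.

    `epochs` is assumed to have shape (n_epochs, n_samples), holding the
    average of the FZ, FCZ, and CZ channels, with each epoch spanning
    -200 to +1000 ms around the event marker (1200 ms total).
    """
    # Baseline-correct each epoch to the 200 ms pre-stimulus interval
    n_base = int(BASELINE_MS / 1000 * FS)
    corrected = epochs - epochs[:, :n_base].mean(axis=1, keepdims=True)

    # Reject epochs whose signal exceeds +/-100 uV at any sample
    keep = np.all(np.abs(corrected) <= reject_uv, axis=1)
    clean = corrected[keep]

    # Average the retained epochs, then take the 370-470 ms mean amplitude
    erp = clean.mean(axis=0)
    i0 = n_base + int(WINDOW_MS[0] / 1000 * FS)
    i1 = n_base + int(WINDOW_MS[1] / 1000 * FS)
    return erp[i0:i1].mean()
```

One such value per participant and condition would then feed the group comparisons and the correlations with BDI and anhedonia scores reported above.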
More importantly, at the electrophysiological level, patients showed decreased P3 amplitudes compared to HCs under the social acceptance condition rather than the social exclusion condition. P3 amplitudes evoked by acceptance were borderline negatively correlated with anhedonia scores in the patient group. A mechanism for understanding the impairment of social function in depression is discussed below. This study found that the scores of patients with depression for negative affect and primary needs, measured after the social ostracism block, were the same as those of HCs, suggesting that at the behavioral level patients were not more sensitive to social ostracism than HCs. There was also no significant difference in the P3 component response to social exclusionary events between patients and HCs. Because the detailed settings of the social exclusion block of our Cyberball paradigm replicated those of Themanson et al.'s ERP version of Cyberball, the P3 component activated by exclusionary events mainly represents the explicit awareness or perception of being excluded and the related allocation of attention to the exclusionary experience (Themanson et al., 2013). This result was inconsistent with our first hypothesis. The heterogeneity of the patients may be one confounding factor affecting these results. More importantly, the relationship between social ostracism and depression could involve a model in which social ostracism events activate brain regions involved in negative affect, elicit negative self-cognition, and release proinflammatory cytokines, with an increased risk of depression (Slavich et al., 2010). We therefore suggest that the sensitivity to social ostracism depends more on stressful life events, which is consistent with a previous report (Iffland et al., 2014). The main behavioral group effect of this study was found primarily in social acceptance, where patients scored lower on positive affect and basic needs compared to HCs. This indicated that patients with depression show a dull response to positive cues. Our finding is consistent with a previous study (Sloan et al., 2001), which reported that depressed women showed reduced response frequency and intensity only to pleasant stimuli, and differed from non-depressed women only in the recall of pleasant words. More importantly, the electrophysiological results indicated that patients showed decreased P3 amplitudes in response to social acceptance when compared to HCs. Studies on inclusion and overinclusion demonstrated that overinclusion increased the satisfaction of primary needs and indicated that the P3 amplitude signals the modulation of the subjective expectancy of involvement (Niedeggen et al., 2014). Our results therefore indicate that depressive patients had a lower expectancy of social involvement during social acceptance. It was further noted that the P3 amplitudes evoked by inclusion were borderline negatively correlated with the severity of anhedonia symptoms. Because anhedonia is recognized as a typical clinical symptom of depression, the P3 amplitude could serve as a prognostic index of anhedonia symptoms in depression. Furthermore, the electrophysiological measure may be a more sensitive technique for indexing the impairment than the behavioral results, which did not correlate with depressive symptoms.
Our study also found that exclusion led to detrimental effects on mood and the four measured human needs (belonging, control, self-esteem, and meaningful existence) in both the patient and HC groups, in agreement with previous studies (Zadro et al., 2006; Williams, 2007; Jamieson et al., 2010; Onoda et al., 2010; Domsalla et al., 2014). This indicates that social exclusion, which has been conceptualized as a significant threat to survival (Baumeister and Leary, 1995; Macdonald and Leary, 2005), can have a negative impact on psychological processes. Moreover, it is worth noting that in this study, patients with depression showed more prolonged negative effects of ostracism than did HCs after the social exclusion task. This persistence of the effect of exclusion, which has been hypothesized to be a marker of emotional dysregulation (Kashdan et al., 2006), was also found in individuals experiencing psychological difficulties, such as social anxiety and schizophrenia (Zadro et al., 2006; Perry et al., 2011). This can be explained by the fact that individuals with psychological vulnerability are more likely to ruminate over negative social encounters. Davidson's view on plasticity in the neural circuitry of emotion could also provide an explanation: mental disorders, especially mood disorders, are characterized by the expression of normal emotion in inappropriate contexts (e.g., the expression of the negative affect of social ostracism in a context in which mood should have been appeased) (Davidson et al., 2000). The current study further suggests that future research on ostracism in patients with mental disorders should assess the effects of ostracism across time, rather than focusing only on immediate reactions. This study has theoretical and clinical value for understanding the involvement of anhedonia in depression. The results showed that depressive patients have deficits in processing pleasurable rather than aversive social cues, indicating that anhedonia in depression can also be demonstrated in the processing of social reward stimuli. This supports the possibility that anhedonia is one of the most promising diagnostic endophenotypes of depression (Pizzagalli, 2014). Moreover, most previous studies measured anhedonia using face perception or emotional words as emotion-inducing stimuli (Surguladze et al., 2004; Joormann and Gotlib, 2006; Wang et al., 2006). The present study further tested the anhedonia theory in the social interaction environment, which is more critical to human functioning. At the clinical level, psychotherapy for interpersonal dysfunction should pay much more attention to patients' lowered expectancy of involvement in social acceptance and encourage patients to engage in rewarding social activities to moderate depression. There are some limitations to our study. One is that the inclusion criteria produced a heterogeneous sample: the subjects consisted of some chronic depression patients, most of whom used medications, and some drug-free first-episode patients. Thus, the drugs could have influenced anhedonia in social interactions. Another limitation is the poor spatial resolution of ERPs, which cannot accurately discriminate the activation of brain areas during social interactions. Future studies using other brain imaging methods, such as functional magnetic resonance imaging with high spatial resolution, are needed to substantiate and extend our findings.
CONCLUSIONS
The present results demonstrated that, when social acceptance and ostracism conditions were considered simultaneously, patients with depression experienced lower positive affect and basic need satisfaction than did healthy control subjects, mainly during social acceptance. The P3 amplitudes were significantly smaller in patients than in controls during social inclusion. In addition, the P3 amplitudes in patients evoked by inclusion were borderline negatively correlated with the severity of anhedonia symptoms. These findings indicate that the interpersonal dysfunctions in depressive patients are mainly due to anhedonia toward socially rewarding stimuli rather than heightened sensitivity to social rejection. The current study also provides behavioral and electrophysiological evidence that anhedonia is an endophenotype of depression.

ETHICS STATEMENT
This study was approved by the ethics board of Anhui Medical University. All subjects voluntarily joined this study and provided informed consent. Minors, persons with disabilities, and endangered animal species were not involved in this study.
Anomalous Proximity Effect and Theoretical Design for its Realization We discuss the stability of zero-energy states appearing in a dirty normal metal attached to a superconducting thin film with Dresselhaus [110] spin-orbit coupling under an in-plane Zeeman field. The Dresselhaus superconductor preserves an additional chiral symmetry and traps more than one zero-energy state at its edges. All the zero-energy states at an edge belong to the same chirality in a large Zeeman field due to the effective $p$-wave pairing symmetry. The pure chiral nature of the wave function enables the penetration of the zero-energy states into the dirty normal metal while keeping their high degree of degeneracy. By applying a theorem, we prove that perfect Andreev reflection into the dirty normal metal occurs at zero energy. This paper gives a microscopic understanding of the anomalous proximity effect. I. INTRODUCTION The proximity effect has been an important issue in the physics of superconductivity. In a normal metal attached to a metallic superconductor, penetrating Cooper pairs form a gap structure in the quasiparticle density of states (DOS) at the Fermi level (zero energy) and modify the low-energy properties there. In spin-triplet superconductor junctions, however, the penetrating Cooper pairs form a zero-energy peak in the DOS 1-3. This brings various anomalous electromagnetic properties to the normal metal [4-6]; this effect is called the anomalous proximity effect. For instance, the perfect Andreev reflection from a p_x-wave superconductor into a dirty normal metal causes anomalous low-energy transport in the x direction, such as the zero-bias conductance quantization in normal-metal/superconductor (NS) junctions 4 and the fractional current-phase relationship in superconductor/normal-metal/superconductor (SNS) junctions 3. Recently, these characteristic transport phenomena have been investigated as a part of Majorana physics 7,8 based on the topological classification 9. In fact, using dimensional reduction, i.e., by fixing the wave vector in the transverse direction (say k_y), spin-triplet p_x-wave superconductivity is topologically characterized by a one-dimensional winding number 10. The number of zero-energy states (ZESs) at an edge is equal to the number of propagating channels N_c. As a consequence, the dispersion of the edge states becomes flat as a function of k_y. The anomalous proximity effect originates from the penetration of the ZESs into the dirty normal metal while keeping their high degree of degeneracy 1,3,5. Theoretically, it has been unclear what symmetry protects the high degeneracy of the ZESs and why the perfect Andreev reflection persists at zero energy. Although it is difficult to fabricate spin-triplet superconducting junctions using existing materials, the rapid progress in Majorana physics on artificial superconductors [11-17] and that in spintronics for controlling the spin-orbit interaction 18,19 may change this situation. To have topologically nontrivial artificial superconductors, a set of three potentials is necessary: the spin-orbit coupling, the Zeeman field, and the pair potential. Among them, the spin-orbit interaction mainly affects the spectra of the edge states. In InSb or GaAs, for example, the Dresselhaus [110] spin-orbit interaction 20 is large in films grown along the [110] direction.
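For orientation, it may help to recall the continuum form in which the [110] Dresselhaus coupling is commonly quoted in the literature. This is an assumption added here for context; the paper's own Hamiltonian is specified in Sec. II and Appendix B and is not reproduced by this sketch.

```latex
% Commonly quoted continuum form of the Dresselhaus coupling in a
% [110]-grown film (a literature convention, assumed here for orientation):
\hat{H}_{D[110]} \simeq \lambda_D \, k_x \, \hat{\sigma}_3
```

If this form applies, the effective spin-orbit field points out of the film plane while the Zeeman field lies in plane, which makes an expansion in the small parameter α_D = λ_D k_F / V_ex, as used below, natural.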
Theoretical studies 21,22 have shown that such an artificial superconductor hosts ZESs with a flat dispersion similar to those of the p_x-wave superconductor. We also confirm that a proximitized spin-helix thin film 18,19 traps the flat-band ZESs under appropriate tuning of the Zeeman field. The Dresselhaus superconductors may be classified into the BDI symmetry class in the sense that the chiral symmetry and the particle-hole one can be defined independently. Recent theoretical studies [23-25] have shown that the chiral symmetry is responsible for the stability of more than one Majorana fermion at the edge of a BDI superconductor. On the basis of this insight, we solve an outstanding problem of the anomalous proximity effect. In this paper, we first demonstrate that the Dresselhaus superconductors exhibit the anomalous proximity effect in a large Zeeman field. After showing the unitary equivalence between the Hamiltonian of the Dresselhaus superconductor and that of the spin-triplet p_x-wave one, we analyze the chiral property of the ZESs both at the edge of the superconductor and in the normal metal attached to it. The analysis shows that all the ZESs in the normal metal belong to the same chirality due to the p_x-wave pairing symmetry. We prove that the pure chiral nature of the wave function protects the high degeneracy at zero energy and causes the perfect Andreev reflection into the dirty normal metal. This paper provides a microscopic understanding of the anomalous proximity effect and a design for an artificial p_x-wave superconductor. II. ANOMALOUS PROXIMITY EFFECT First, we numerically demonstrate the anomalous proximity effect of the Dresselhaus superconductor. Let us consider an NS junction on the two-dimensional tight-binding model shown in Fig. 1. A lattice site is indicated by a vector r = j x + m y, where x and y are the unit vectors in the x and y directions, respectively. In the y direction, the number of lattice sites is M and the hard-wall boundary condition is applied. The present junction consists of three segments: an ideal lead wire (−∞ < j ≤ 0), a disordered normal segment (1 ≤ j ≤ L), and a superconducting segment (L + 1 ≤ j < ∞). In the Hamiltonian, c†_{r,σ} (c_{r,σ}) is the creation (annihilation) operator of an electron at site r with spin σ = ↑ or ↓, t denotes the hopping integral between nearest-neighbor sites ⟨r, r′⟩, μ is the chemical potential, and λ_D represents the strength of the Dresselhaus [110] spin-orbit interaction. We consider an impurity potential given randomly in the range −W/2 ≤ V_imp(r) ≤ W/2 in the normal segment and an s-wave pair potential Δ_0 in the superconducting segment. The Pauli matrices in spin space are represented by σ̂_j for j = 1-3 and the unit matrix in spin space is σ̂_0. By tuning the magnetic field B in the x direction, it is possible to introduce the external Zeeman potential V_ex. We calculate the differential conductance G_NS of the NS junction from the formula 26 $G_{NS} = \frac{e^2}{h}\sum_{\zeta,\eta}\left[\delta_{\zeta\eta} - |r^{ee}_{\zeta\eta}|^2 + |r^{he}_{\zeta\eta}|^2\right]$, where r^{ee}_{ζη} and r^{he}_{ζη} denote the normal and Andreev reflection coefficients at energy E, respectively, and the sum runs over the spin-resolved channels. The indices ζ and η label the outgoing and incoming channels, respectively. These reflection coefficients are calculated by using the lattice Green's function method 27,28. In Fig. 2, we present the differential conductance of the Dresselhaus superconductor as a function of the bias voltage for several choices of the length L of the disordered segment, where we choose the parameters μ = 1.0t, λ_D = 0.2t, W = 2.0t, M = 10, and Δ_0 = 0.1t. The results are normalized to G_Q = 2e²/h. In Fig. 2(a), we choose V_ex = 1.2t, leading to a number of propagating channels N_c = 5. The differential conductance decreases with increasing L at finite bias voltage. However, the zero-bias conductance is quantized at G_Q N_c irrespective of L. We have confirmed that the zero-bias conductance is always quantized at G_Q N_c even when we change the wire width M. The results suggest that perfect transmission channels exist in the disordered normal segment 4 and that their number is equal to N_c. The conductance quantization at zero bias is one aspect of the anomalous proximity effect. We have also confirmed the fractional current-phase relationship in SNS junctions 3,29. Such anomalous behavior can be seen when the Zeeman field is larger than a critical value, V_ex > V_c, with V_c = 0.92t for the present parameter choice. For V_ex < V_c, on the other hand, the conductance quantization is absent, as shown in Fig. 2(b). FIG. 2. The differential conductance is plotted as a function of the bias voltage for several choices of the length L of the disordered segment. In (a), the Zeeman potential V_ex = 1.2t is chosen to be larger than the critical value V_c = 0.92t; the number of propagating channels is N_c = 5. In (b), we choose V_ex = 0.5t < V_c, leading to N_c = 6. A. Chiral Symmetry In what follows, we consider the Dresselhaus superconductor in continuous space for simplicity. The BdG Hamiltonian is built from $\xi_{\mathbf r} = -\frac{\hbar^2}{2m}\nabla^2 - \mu$, where m denotes the effective mass of an electron. The pair potential Δ_0 and the impurity potential V_imp are introduced in the superconducting and normal segments, respectively. We assume a Zeeman potential large enough that $\alpha_D \equiv \lambda_D k_F/V_{ex} \ll 1$ is satisfied, with $k_F = \sqrt{2m\mu}/\hbar$. By applying the unitary transformations shown in Appendix B, Ȟ_0 is transformed into Ȟ_1 = Ȟ_P + V̌_Δ within first order of α_D, where s_σ = 1 (−1) for σ = ↑ (↓). The Hamiltonians Ĥ_σ are equivalent to that of the spin-triplet p_x-wave superconductor, and V̌_Δ mixes the two spin sectors. In the anomalous phase V_ex > V_c, all the spin-↑ states pinch off from the Fermi level and only the spin-↓ states remain at the Fermi level. Therefore, the spin-mixing term V̌_Δ does not affect the remaining spin-↓ states at all. In this way, we can shrink the 4 × 4 Hamiltonian Ȟ_1 down to the 2 × 2 Hamiltonian Ĥ_↓. The 2 × 2 BdG Hamiltonian preserves a chiral symmetry, $\{\hat{H}_\downarrow, \hat{\tau}_1\} = 0$, where τ̂_j for j = 1-3 are the Pauli matrices in Nambu space. Here we summarize two important features of the eigenstates of Ĥ_↓ proved in Ref. 10 (see also Appendix A for details). (i) The eigenstates of Ĥ_↓ at zero energy are simultaneously eigenstates of τ̂_1. Namely, the eigenvectors at zero energy satisfy $\hat{H}_\downarrow \varphi_{\nu_0,\lambda}(\mathbf r) = 0$ and $\hat{\tau}_1 \varphi_{\nu_0,\lambda}(\mathbf r) = \lambda\,\varphi_{\nu_0,\lambda}(\mathbf r)$, where λ = ±1 represents the eigenvalue of τ̂_1. We have omitted the spin index from the subscripts of φ_{ν0,λ} because the spin is always ↓. (ii) In contrast to the zero-energy states, the nonzero-energy states are not eigenstates of τ̂_1. They are described by a linear combination of two states: one belonging to λ = 1 and the other to λ = −1.
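Properties (i) and (ii) are easy to exercise numerically on a toy model. The sketch below uses a Kitaev chain, a minimal lattice analogue of a spinless p_x-wave superconductor rather than the Dresselhaus model itself; it checks that the BdG matrix anticommutes with a τ̂_1-like chiral operator, that the two near-zero modes are chirality eigenstates, and that opposite chiralities localize at opposite edges. All parameters are illustrative.

```python
import numpy as np

# Toy check of properties (i)/(ii) on a Kitaev chain, a minimal lattice
# analogue of a p_x-wave superconductor (an illustration, not the paper's
# Dresselhaus model). Parameters are arbitrary illustrative choices.
N, t, delta, mu = 60, 1.0, 1.0, 0.2

h = -mu * np.eye(N) - t * (np.eye(N, k=1) + np.eye(N, k=-1))
d = delta * (np.eye(N, k=1) - np.eye(N, k=-1))        # real antisymmetric pairing
H = np.block([[h, d], [d.T, -h]])                      # BdG matrix, basis (c, c†)
S = np.block([[np.zeros((N, N)), np.eye(N)],
              [np.eye(N), np.zeros((N, N))]])          # chiral operator (tau_1-like)

assert np.allclose(S @ H + H @ S, 0)                   # {H, S} = 0

E, V = np.linalg.eigh(H)
zero = np.abs(E) < 1e-6                                # near-zero edge modes
print("zero-mode energies:", E[zero])

# Property (i): within the zero-energy subspace S has eigenvalues ±1,
# and each chiral eigenvector lives on one edge only.
Vz = V[:, zero]
lam, W = np.linalg.eigh(Vz.T @ S @ Vz)
for l, w in zip(lam, (Vz @ W).T):
    dens = (np.abs(w)**2)[:N] + (np.abs(w)**2)[N:]     # electron + hole density
    side = "left" if dens[:N // 2].sum() > 0.5 else "right"
    print(f"chirality {l:+.3f} localized at the {side} edge")
```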
We prove the robustness of the highly degenerate ZESs in the dirty normal segment and the perfect Andreev reflection by taking these features into account. Next, we analyze the edge states of an isolated Dresselhaus superconductor. From the second equation in Eq. (8), the wave function of the zero-energy states can be written in separable form, where $Y_n(y) = \sqrt{2/M}\,\sin(n\pi y/M)$ is the wave function in the y direction and n indicates the transmission channel. In the x direction, we assume that the length of the superconductor is 2L (i.e., −L ≤ x ≤ L) and apply the hard-wall boundary conditions at its edges, φ_{n,λ}(−L) = φ_{n,λ}(L) = 0. By substituting Eq. (9) into the first equation in Eq. (8), we obtain the allowed solutions, where $\xi_D = \xi_0/\alpha_D$, $\xi_0 = \hbar v_F/\Delta_0$, $k_n = \sqrt{2m(\mu_n + V_{ex})}/\hbar$, and $\mu_n = \mu - (\hbar n\pi/M)^2/(2m)$. The length of the superconductor must be long enough that L/ξ_D ≫ 1 is satisfied. For V_ex > V_c, we find two solutions, with $q_n^2 = k_n^2 - \xi_D^{-2}$, where C_L and C_R are the normalization coefficients. It is easy to confirm that φ^L_{n,−}(x), localized at the left edge, belongs to λ = −1 and that φ^R_{n,+}(x), localized at the right edge, belongs to λ = 1, as schematically illustrated in Fig. 3. The field operator of an electron with spin ↓ is expanded in Bogoliubov quasiparticle operators, where γ†_ν (γ_ν) is the creation (annihilation) operator of the Bogoliubov quasiparticle belonging to E_ν and Ξ̂ is the charge conjugation operator, with K indicating complex conjugation. Eq. (14) represents the particle-hole symmetry of the BdG Hamiltonian. The wave functions of the zero-energy states are described by $\varphi^L_n(\mathbf r) = \varphi^L_{n,-}(x)\,Y_n(y)$ and $\varphi^R_n(\mathbf r) = \varphi^R_{n,+}(x)\,Y_n(y)$ (15), with Eqs. (11) and (12). We can extract the electron field operator of a ZES for each propagating channel n; the operator γ^L_n(r) is pure imaginary while γ^R_n(r) is real in the present gauge choice. It is easy to show that they satisfy the Majorana relation $\gamma^{L(R)}_n(\mathbf r)^\dagger = \gamma^{L(R)}_n(\mathbf r)$. This relation holds for all propagating channels n. Therefore, the number of Majorana fermions at each edge is equal to the number of propagating channels in the spin-↓ sector, N_↓. Since the spin-↑ channels are absent for V_ex > V_c, N_↓ is equal to N_c. The ZESs are degenerate at zero energy at the same edge. At the left edge, for example, all of the ZESs belong to λ = −1, as shown in Eq. (11). According to property (ii), such highly degenerate ZESs are robust under potential disorder because the random potentials preserve the chiral symmetry and ZESs with λ = 1 are absent there. In the ballistic limit, the perfect conductance quantization at zero bias is a common property of unconventional superconductors hosting ZESs with flat dispersion. For instance, the spin-singlet d_xy-wave superconductor 30 and the spin-triplet f-wave superconductor with the pair potential proportional to $k_x(1 - 2k_y^2)$ also show such a drastic effect. The Hamiltonians of these superconductors also preserve the chiral symmetry. However, their highly degenerate ZESs are fragile under potential disorder because ZESs with two different chiralities coexist at the same edge 10. Therefore, the presence of the chiral symmetry is not a sufficient condition for the anomalous proximity effect but only a necessary one. B. Perfect Andreev Reflection Finally and most importantly, we prove the stability of the highly degenerate ZESs in the dirty normal segment attached to the left edge of the superconductor, as shown in Fig. 1(a).
In the absence of impurity potentials, the wave function in the normal segment at E = 0 is described by Eq. (19), where r^{ee}_n (r^{he}_n) is the normal (Andreev) reflection coefficient in channel n. The current conservation law implies $|r^{ee}_n|^2 + |r^{he}_n|^2 = 1$ at E = 0 for each channel. From the boundary conditions at the NS interface, the reflection coefficients are calculated to be $r^{ee}_n = 0$ and $r^{he}_n = -1$ for all n (Eq. (20)). The wave function in Eq. (19) turns out to be the eigenstate of τ̂_1 belonging to λ = −1 (i.e., $\varphi_N \propto [1, -1]^T$). This fact is unique to the p_x-wave pairing symmetry. For the d_xy- and f-wave cases, ZESs of two different chiralities coexist in the normal metal. In the present junction, all the ZESs in the normal metal have the same chirality, λ = −1, as do the ZESs at the left edge of the superconductor. According to property (ii), they cannot form nonzero-energy states. Therefore, the ZESs can penetrate into the normal segment while keeping their high degree of degeneracy. This conclusion remains valid under potential disorder because the impurity potential preserves the chiral symmetry and does not damage the pure chiral feature of the ZESs. In addition, it is possible to show that the pure chiral feature of the ZESs protects the perfect Andreev reflection into the dirty normal segment. According to property (i), the ZESs must be eigenstates of τ̂_1. We emphasize that the wave function in Eq. (19) can be the eigenstate of τ̂_1 belonging to λ = −1 if and only if Eq. (20) is satisfied. Although the channel index n is no longer a good quantum number under potential disorder, all the wave functions in the normal segment have the same vector structure, reflecting the pure chiral nature. This is the mathematical requirement of the chiral symmetry. The physical consequence of this vector structure is the perfect Andreev reflection in the disordered junction at E = 0. This explains the perfect quantization of the zero-bias conductance at 2e²N_c/h. IV. CONCLUSION In conclusion, we have discussed the stability of the highly degenerate zero-energy states (ZESs) appearing in disordered junctions consisting of a superconducting thin film with Dresselhaus [110] spin-orbit coupling. The Dresselhaus superconductor hosts more than one ZES at its edges. When we make a normal-metal/superconductor junction of the Dresselhaus superconductor, such highly degenerate ZESs can penetrate into the dirty normal segment and form resonant transmission channels there. The analysis of the wave function in the normal segment shows that all the ZESs have the same chirality due to the effective p_x-wave pairing symmetry. The perfect Andreev reflection into the dirty normal metal is a direct consequence of the pure chiral feature of the ZESs. Our paper provides a microscopic understanding of the anomalous proximity effect of spin-triplet superconductors. This work was supported in part by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan and by the Ministry of Education and Science of the Russian Federation (Grant No. 14Y.26.31.0007). Appendix B: The Hamiltonian in this basis is represented only by real numbers. Next, we apply a transformation similar to the Foldy-Wouthuysen transformation 31 to the BdG Hamiltonian in Eq. (B4). Using a unitary matrix, the diagonal term of Eq. (B4) can be expanded by means of the Baker-Hausdorff formula. We assume a Zeeman potential large enough that $\alpha_D = \lambda_D k_F/V_{ex} \ll 1$ is satisfied, where $k_F = \sqrt{2m\mu}/\hbar$ denotes the Fermi wave number. From this assumption, we obtain the transformed diagonal term within first order of α_D.
The off-diagonal term corresponding to the pair potential is transformed to $e^{i\hat S}(i\Delta_0\hat\sigma_2)e^{-i\hat S} = i\Delta_0\hat\sigma_2 + i[\hat S, i\Delta_0\hat\sigma_2] + \cdots$, where we assume a uniform pair potential (i.e., $[p_x, \Delta_0] = 0$). As a result, the BdG Hamiltonian can be written in the transformed basis. By interchanging the second column and the third one, and by interchanging the second row and the third one, the Hamiltonian can be deformed into Ȟ_1. This is the starting Hamiltonian of the analytic calculation. We find that Ȟ_1 preserves the chiral symmetry. Finally, we discuss the symmetry property of Ȟ_0 in Eq. (B1) in its original basis. It is easy to show that Ȟ_0 satisfies relations that represent the chiral symmetry. The Hamiltonian Ȟ_0 also satisfies Eq. (B20), where Ξ̌_0 represents the charge conjugation, with K meaning complex conjugation. The first equation in Eq. (B20) represents the particle-hole symmetry.
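As a closing practical note, the transport formula of Sec. II is simple to evaluate once the reflection matrices are known. The sketch below assumes the spin-resolved form G_NS = (e²/h) Σ [δ − |r_ee|² + |r_he|²] reconstructed above and feeds it hypothetical reflection matrices (not outputs of the paper's lattice Green's-function code) to confirm that perfect Andreev reflection reproduces the quantized value 2e²N_c/h.

```python
import numpy as np

# Minimal sketch (not the paper's lattice Green's-function code): evaluate
# the NS conductance from given reflection matrices, assuming the
# spin-resolved form G_NS = (e^2/h) * sum [delta - |r_ee|^2 + |r_he|^2].
def g_ns(r_ee: np.ndarray, r_he: np.ndarray) -> float:
    """Zero-bias conductance in units of e^2/h for N_c spin-resolved channels."""
    n_c = r_ee.shape[0]
    return n_c - np.sum(np.abs(r_ee) ** 2) + np.sum(np.abs(r_he) ** 2)

n_c = 5                                        # as in Fig. 2(a)
r_ee = np.zeros((n_c, n_c), dtype=complex)     # no normal reflection
r_he = -np.eye(n_c, dtype=complex)             # perfect Andreev reflection
print(g_ns(r_ee, r_he))                        # -> 10.0, i.e. G = (2e^2/h) N_c
```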
Two new records and description of a new Perinereis (Annelida, Nereididae) species for the Saudi Arabian Red Sea region Abstract Annelid biodiversity studies in the Red Sea are limited, and integrative taxonomy is needed to accurately improve reference libraries in the region. As part of the bioblitz effort in Saudi Arabia to assess the invertebrate biodiversity in the northern Red Sea and Gulf of Aqaba, Perinereis specimens from intertidal marine and lagoon-like rocky environments were selected for an independent assessment, given the known taxonomic ambiguities in this genus. This study used an integrative approach, combining molecular with morphological and geographic data. Our results demonstrate that specimens found mainly in the Gulf of Aqaba are not only morphologically different from the other five similar Perinereis Group I species reported in the region, but phylogenetic analysis using available COI sequences from GenBank also revealed distinct molecular operational taxonomic units, suggesting an undescribed species, P. kaustiana sp. nov. The new species is genetically close to, and shares a similar paragnath pattern with, the Indo-Pacific distributed P. helleri, in particular in Area III and Areas VII-VIII. Therefore, we suggest it may belong to the same species complex. However, P. kaustiana sp. nov. differs from the latter mainly in the shorter length of the postero-dorsal tentacular cirri, median parapodia with much longer dorsal cirri, and posteriormost parapodia with much wider and greatly expanded dorsal ligules. Additionally, two new records are reported for the Saudi Neom area, belonging to P. damietta and P. suezensis, previously described only from the Egyptian coast (Suez Canal); these are distributed sympatrically with the new species, but apparently not sympatrically with each other. Introduction Based on genetic databases (i.e., BOLD and GenBank), and despite the recent advances in integrative studies focused on polychaetes (i.e., Nygren et al. 2010; Villalobos-Guerrero et al. 2021; Teixeira et al. 2023), there are still many taxonomic ambiguities and unidentified annelid species in some groups of Nereididae (i.e., Martin et al. 2021; Elgetany et al. 2022). Perinereis Kinberg, 1865 is one of the most diverse genera in this family, currently including between 97 (Wilson et al. 2023) and 106 (WoRMS Editorial Board 2024) valid species distributed worldwide. Of these, approximately 16 species are reported for the Arabian Peninsula (Ocean Biodiversity Information System, OBIS; Mohammad 1971; Wehe and Fiege 2002). Due to apparently similar paragnath patterns, overall body features, and a lack of detailed systematic studies, Perinereis species are often problematic to identify to the species level (Bakken and Wilson 2005; Yousefi et al. 2011). This has led to the informal denomination of species complexes and the recognition of geographic morphs and varieties, such as the P. cultrifera (Grube, 1840) species group (type locality: Naples, Italy; Scaps et al. 2000) and the P. nuntia (Lamarck, 1818) (type locality: Gulf of Suez, Egypt) species group (Wilson and Glasby 1993; Glasby and Hsieh 2006; Sampértegui et al.
2013), both reported for the Red Sea (OBIS). Thanks to molecular data, it is now easier to screen for potential new species with apparently similar morphotypes. Recent evidence comparing populations from different regions has shown that when specimens differ genetically, further analysis of the diagnostic morphological features often leads to the recognition of distinct characters that were previously overlooked (i.e., Sampértegui et al. 2013; Teixeira et al. 2022b). A recent review on meiofauna (Cerca et al. 2018) and recent polychaete studies (i.e., Abe et al. 2019; Tilic et al. 2019; Martin et al. 2020), including on Nereididae (Glasby et al. 2013; Sampieri et al. 2021; Teixeira et al. 2022a, b), also demonstrate that cryptic and pseudo-cryptic species often have geographically restricted distributions, with the range of cryptic species being smaller than that of the parent morphospecies. The Egyptian side of the Red Sea has been the focus of an increasing number of polychaete studies, either reviewing existing species groups (i.e., Villalobos-Guerrero 2019) or describing new species that were previously considered cryptic (i.e., Elgetany et al. 2022). The northern Saudi Arabian Red Sea and Gulf of Aqaba, despite being expected to host a large biodiversity (Roberts et al. 2002; DiBattista et al. 2016), have seen comparatively few biodiversity studies involving molecular techniques, particularly for polychaetes. To address this gap and document the invertebrate biodiversity of the region, a bioblitz was conducted in the Neom region (northern Saudi Arabian Red Sea and Gulf of Aqaba), with emphasis on mobile invertebrates and cryptobenthic fish. As part of this effort, this study used a molecular approach, combined with morphological and geographic data, to investigate Perinereis samples collected from marine intertidal and lagoon-like rocky environments of the northern Red Sea. In particular, we aimed to assess species distributions and to investigate whether the collected specimens belonged to the existing P. cultrifera group, the P. nuntia group, or other similar Perinereis species reported for the region, or whether they represented undescribed species. Sampling effort The NEOM bioblitz sampling campaign surveyed 38 shallow-water and coral reef sites down to 25 meters depth, plus some intertidal habitats, along the northern region of the Saudi Arabian Red Sea and Gulf of Aqaba (Neom area). This initiative aims to establish a biodiversity inventory of marine benthic invertebrates (mainly mobile) and cryptobenthic fish in the Red Sea using DNA barcoding and metabarcoding. Only intertidal marine and lagoon-like rocky environments were considered for the purpose of this study, in order to perform an independent assessment within Perinereis, given the known taxonomic ambiguities in several species of the genus from this particular habitat. Table 1 details the number of original specimens collected at each sampling location, which corresponds to the number of COI sequences analysed. The number of COI sequences from Perinereis species publicly available in GenBank, the respective sampling areas, and references are also detailed in Table 1; these were used for comparison purposes. The collected Red Sea Perinereis specimens were deposited at the NTNU University Museum, Trondheim, Norway (NTNU-VM; Bakken et al.
2024; vouchers: NTNU-VM-86010 to NTNU-VM-86044). Perinereis oliveirae specimens are deposited at the Biological Research Collection of the Department of Biology of the University of Aveiro, Portugal (CoBI at DBUA; curated by Ascensão Ravara: aravara@ua.pt; vouchers: DBUA0002494.02.v01 and DBUA0002494.02.v02). Specimens that were exhausted in the DNA analysis were assigned only the Process ID from the BOLD systems (http://v4.boldsystems.org/), corresponding to MTPNO009-23 (Gulf of Aqaba, Magna). Some specimens were preserved in 96% ethanol and others in formalin, with a respective tissue sample preserved in ethanol for molecular work (detailed in Suppl. material 1). DNA extraction, PCR amplification, and alignments DNA sequences of the 5' end of the mitochondrial cytochrome oxidase subunit I (mtCOI-5P) were obtained for all the collected Perinereis specimens and used for the main analysis. A representative number of specimens per location for the new species were also sequenced for the mitochondrial 16S rRNA and the D2 region of the nuclear 28S rRNA, for future reference purposes. DNA extraction was performed using QuickExtract DNA Extraction Solution (Lucigen), with 50 µl of the reagent per tube. The tubes were then transferred to a heat block at 65 °C for 30 min and held for an additional 2 min at 98 °C. Depending on specimen size, only a small amount of tissue (e.g., a single parapodium) or the posterior end of the worm was used. PCR reactions were performed using a premade PCR mix from VWR containing 10 µl per tube of Red Taq DNA Polymerase Master Kit (2 mM, 1.1×), 0.5 µl of each primer (10 mM), and 1 µl of DNA template, in a total reaction volume of 12 µl. Table 2 displays the PCR conditions, primers, and sequence lengths for the different markers. Amplification success was screened on a 1% agarose gel using 1 μl of PCR product. Successful PCR products were then purified using the Exonuclease I and Shrimp Alkaline Phosphatase protocol (ExoSAP-IT, Applied Biosystems), according to the manufacturer's instructions. Cleaned-up amplicons were sent to the KAUST Sanger sequencing service for forward sequencing. Phylogenetic analysis and MOTU clustering For comparison purposes, GenBank COI sequence data from P. marionii (Audouin & Milne Edwards, 1833), P. vallata (Grube, 1857), P. helleri (Grube, 1878), and the outgroup Alitta virens (M. Sars, 1835) completed the final dataset (Table 1, Suppl. material 1). The phylogenetic analysis was performed through maximum likelihood (ML) for the entire dataset. Best-fit models were selected using the Akaike Information Criterion in MEGA. The phylogenetic relationship analysis was executed with 500 bootstrap runs using the General Time-Reversible model with gamma-distributed rates and a proportion of invariant sites (GTR+G+I). The final version of the tree was edited with the software Inkscape v. 1.2 (https://www.inkscape.org). Three delimitation methods were applied to obtain Molecular Operational Taxonomic Units (MOTUs): the Barcode Index Number (BIN), which makes use of the Refined Single Linkage (RESL) algorithm available only in BOLD (Ratnasingham and Hebert 2013); the Assemble Species by Automatic Partitioning (ASAP; Puillandre et al. 2021), implemented in a web interface (https://bioinfo.mnhn.fr/abi/public/asap/asapweb.html) with default settings using the Kimura-2-Parameter (K2P) distance matrix; and, lastly, the Poisson Tree Processes (bPTP; Zhang et al.
2013), performed in a dedicated web interface (https://species.h-its.org/) using the ML phylogeny obtained above, run for 500,000 MCMC generations with twenty-five percent of the samples discarded as burn-in. The mean genetic distances for mtCOI (K2P; Kimura 1980) within and between MOTUs were calculated in MEGA. Morphological analysis Specimens were studied using a Leica stereo microscope (model M205 C). Stereo microscope images were taken with a Flexacam C3 camera. Compound microscope images of parapodia and chaetae were obtained with a Leica DM2000 LED imaging light microscope, equipped with a Flexacam C3 camera, after mounting the parapodia on a slide preparation using Aqueous Permanent Mounting Medium (Supermount). Parapodial and chaetal terminology in the taxonomic section follows Bakken and Wilson (2005), with the modifications made by Villalobos-Guerrero and Bakken (2018). The final figure plates were edited with the software Inkscape v. 1.2. For measuring the length of the dorsal ligules, not only the lengths of the tips were considered, but the proximal part of the ligules was also included (e.g., Conde-Vela and Salazar-Vallejo 2015; Villalobos-Guerrero and Carrera-Parra 2015; Teixeira et al. 2022b). Following Hutchings et al. (1991), a specimen is described as having a greatly expanded dorsal notopodial ligule posteriorly only if the dorsal ligule is more than two times as long as the ventral ligule. For the analysis of variation, only complete specimens were considered; total length (TL), length up to chaetiger 15 (L15), and width at chaetiger 15 (W15) were measured with a millimetre rule under the stereomicroscope. The number of chaetigers (NC) was also taken into consideration. TL was measured from the anterior margin of the prostomium to the end of the pygidium, and W15 was measured excluding parapodia. Measurements of the length of the antennae (AL), palps (PL), dorsal cirri (DCL), dorsal ligule (DLL), ventral cirri (VCL), ventral ligule (VLL), median ligule, the length and width of the head (HL and HW, respectively), and the length of all four tentacular cirri, including the longest one (postero-dorsal cirri, DPCL), were also retrieved. Heterogomph falciger blade size comparisons (short, long, and extra-long) are based on Wilson et al. (2023). Spiniger serration is based on the comparison between P. cultrifera (lightly serrated) and P. rullieri (coarsely serrated) from Pilato (1974). Paragnath counts were performed to compare patterns with other morphologically similar Group I Perinereis species (Hutchings et al. 1991). Pharynx paragnath terminology follows Bakken et al. (2009), and the paragnath description of Areas VII and VIII follows Conde-Vela (2018). Terminology for molecular vouchers follows Pleijel et al. (2008) and Astrin et al. (2013). The overall description follows a structure similar to that of Villalobos-Guerrero (2019). Dates of sample collection follow the DD/MM/YY format. Phylogenetic analyses The phylogenetic reconstruction recovered ten MOTUs of Perinereis (Fig. 1A), the delimitation of which is cohesively supported by the three species-delimitation tests applied, except for MOTU 1 and GB1, which are clustered together by the ASAP method. Sequences from P. fayedensis and P. anderssoni are not present in BOLD and have no associated BIN.
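Because the K2P model underpins both the ASAP partitions and the distances reported here, a minimal sketch of the distance itself may be useful. This is a generic implementation of Kimura's (1980) formula, not MEGA's code, and the two short fragments are invented placeholders rather than Perinereis barcodes.

```python
from math import log, sqrt

# Kimura 2-parameter (K2P) distance between two aligned sequences
# (a generic implementation of Kimura 1980, not MEGA's code).
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]        # skip gaps/ambiguities
    n = len(pairs)
    ts = sum(a != b and ({a, b} <= PURINES or {a, b} <= PYRIMIDINES)
             for a, b in pairs)                     # transitions
    tv = sum(a != b for a, b in pairs) - ts         # transversions
    p, q = ts / n, tv / n
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

# Toy fragments (invented placeholders, not real barcodes):
print(round(k2p("ACGTACGTACGTACGT", "ACGTACATACGTACGC"), 4))
```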
Taxonomic account Distribution and habitat. Confined to the northeastern Red Sea (Duba, Shushah Island) and the Gulf of Aqaba (Magna) so far. Type locality: Saudi Arabia, Gulf of Aqaba: Magna region (marine site), 28°26'57.3"N, 34°45'35.4"E. Specimens were collected both in lagoon-like environments and at fully marine sites in rocky areas, usually among coarse-grained sand under rocks. Apparently more abundant and easier to find at marine sites in the Gulf of Aqaba. Can be found in sympatry with P. damietta (Fig. 1B, C) and P. suezensis (Fig. 1B, D), the latter two species as described by Elgetany et al. (2022). Etymology. The species designation pays tribute to the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia, a globally recognized graduate-level research institution. This naming honours KAUST's substantial and enduring contributions to marine science, particularly in advancing our understanding of the Red Sea over the course of more than a decade. Through its dedicated research efforts, KAUST has significantly enriched the scientific community's knowledge of this unique marine environment. Description. Specimens used: NTNU-VM-86011 (holotype) and NTNU-VM-86015 (paratype), both preserved in 96% ethanol, stored at the NTNU University Museum (Norway, NTNU-VM). Head (Fig. 2A, B, E, J): Prostomium pyriform, 1.2× wider than long; 2.5× longer than antennae. Palps with a round or conical palpostyle (Fig. 2A); palpophore longer than wide, subequal to the entire length of the prostomium. Antennae separated, gap half of antennal diameter (Fig. 2E); tapered, less than half the length of the palpophore. Eyes black, anterior and posterior pairs well separated (Fig. 2J). Anterior pair of eyes oval-shaped, as wide as antennal diameter; posterior pair of eyes round or oval-shaped, subequal in width to anterior pair. Distance between the anterior eyes 1.25× longer than between the posterior ones. Nuchal organs covered by the tentacular belt. Pharynx: Pair of dark brown curved jaws with 7-8 denticles; two longitudinal canals emerging from the pulp cavity, both in the mid-section of the jaw (Fig. 2C). Pharynx consisting of maxillary and oral rings with conical paragnaths (Fig. 2A, B). Maxillary ring: Area I = two small paragnaths arranged in a longitudinal line (Fig. 2F). Area II = cluster of 5-7 small paragnaths (Fig. 2F). Area III = central patch of nine small paragnaths, lateral patches with two small paragnaths each (Fig. 2D). Area IV = 13 small paragnaths arranged in a wedge shape, without any bars (Fig. 2D). Oral ring: Area V = a triangle of three large paragnaths (Fig. 2E). Area VI (a+b) = two narrow bar-shaped paragnaths, one on each side, displayed in a straight line (Fig. 2E). Areas VII-VIII = 20-24 small paragnaths in total; Area VII, ridge region with two transverse paragnaths, furrow regions with two longitudinal paragnaths each (Fig. 2G); Area VIII, ridge regions with one paragnath each, furrow regions with two longitudinal paragnaths each (Fig. 2G). Remarks. Some nereidid species groups can have similar morphological features, including paragnath patterns, which may cause misidentifications. The new species' COI clade revealed no GenBank match based on the BLAST tool. Perinereis kaustiana sp. nov. and a sequence belonging to a specimen from Malaysia identified as P. helleri (type locality: Bohol, Philippines) not only are sister to each other and phylogenetically close (Fig. 1A; 19.9 ± 2.4% K2P COI distance), but they also seem to share the same paragnath sizes, shapes, and patterns (Park and Kim 2017: 255, fig.
4e; sampled in South Korea; no molecular data available), including in Area III, with the presence of lateral patches bearing two paragnaths each (Fig. 2D), and the same paragnath arrangements in the furrow and ridge regions of Areas VII-VIII (Fig. 2G). This makes them morphologically very similar and possibly members of the same cryptic complex, which could range from the Red Sea to the Indo-Pacific based on the available COI data. However, P. kaustiana sp. nov. seems to differ from P. helleri in some key features: shorter postero-dorsal tentacular cirri, reaching back to chaetiger 9 instead of the chaetiger 16 reported for P. helleri; median parapodia with much longer dorsal cirri (3×) compared to the ventral ones; and posteriormost parapodia with a much wider dorsal ligule (2.5-3.0×) than the median ligule (Fig. 3C, I) and a greatly expanded dorsal ligule (3× longer than the ventral ligule). Based on the parapodia drawings from Hutchings et al. (1991: 255, fig. 9; syntype ZMB Q3464), the ratio between dorsal and ventral cirri in P. helleri is subequal to slightly longer than the ventral cirri throughout the body, and the posteriormost dorsal ligules are double the width of the median ones and only slightly expanded (up to 2× the length of the ventral ligules; Table 4). Furthermore, P. helleri from Hutchings et al. (1991) does not seem to possess ligules with finger-like ending tips. Table 4. Comparison of selected characters between the species most morphologically similar to P. kaustiana sp. nov., reported for the Arabian Peninsula and Mediterranean Sea and lacking DNA data. The Indo-Pacific P. helleri is also included. Morphological details of paragnath patterns for the P. cultrifera and P. rullieri species complexes also include partial data from topotypical specimens belonging to the private collection of the first author, to be published in the near future (Mohammad 1971; Hutchings et al. 1991; Pilato 1974). Other species with similar paragnath patterns are Perinereis anderssoni (Kinberg 1865: 167-179; Park and Kim 2017: 255, fig. 4d) and Perinereis rullieri (Pilato 1974: 25-36, figs 1-4), which share the same small-sized paragnaths as P. kaustiana sp. nov.; however, the former two species possess only one paragnath in each lateral patch of Area III, and their paragnaths in Areas VII and VIII are usually arranged in two regular rows, without any discernible pattern in the furrow or ridge regions. Perinereis anderssoni is reported from the Atlantic region of the American continent (type locality: Rio de Janeiro, Brazil), while P. rullieri is apparently restricted to the Mediterranean Sea (type locality: between Aci Trezza and Augusta, eastern coast of Sicily, Italy). Moreover, the morphologically similar lineages found within the Perinereis cultrifera (Grube 1840: 74, fig. 6; Hutchings et al. 1991: 253-254, fig. 8a-c) species complex, including P. euiini (Park and Kim 2017: 252-260, figs 1, 2, 4a, b, 5, tables 1, 4; described from South Korea), differ from P. kaustiana sp. nov. in the overall larger paragnath sizes, the lack of any lateral patches in Area III, and the presence of shorter heterogomph falcigers (Park and Kim 2017: 254, fig. 2L). Specimens of Perinereis cultrifera from Lobo et al. (2016) were misidentified and are in fact P. oliveirae (Horst 1889: 38-45, plate 3; Fauvel 1923: 354, fig. 138 e-k), the latter characterised by the presence of three paragnaths in the lateral patches of Area III, while this feature is absent in P.
cultrifera. Perinereis oliveirae is described from the northern Iberian Peninsula, also having very long bar-shaped paragnaths in Areas VI and very short tentacular cirri compared to the length of the head (reaching chaetigers 1 and 2). These features were confirmed based on the two P. oliveirae specimens from this study and on samples from the private collection of the first author. Discussion Our molecular data provide compelling evidence for the existence of a new, deeply divergent, and completely sorted species within Perinereis species Group I in the Red Sea. At first glance, P. kaustiana sp. nov. can easily be misidentified as the well-known and allegedly cosmopolitan P. cultrifera, owing to the classic two bar-shaped paragnaths in Areas VI and the proximity of the Mediterranean Sea. This might be the reason the latter is usually reported for the Red Sea (Wehe and Fiege 2002; Bonyadi-Naeini et al. 2018; OBIS), but a greater sampling effort in the central and southern Red Sea regions is needed to confirm this. Morphological features, such as the paragnath arrangement, as well as the length of the tentacular cirri and the ratios within the parapodia, also allowed the distinction of P. kaustiana sp. nov. from other similar species (see taxonomic key and Tables 4, 5). Upon careful morphological examination, P. kaustiana sp. nov. is morphologically closer to the Indo-Pacific P. helleri than it is to the European P. cultrifera, based mainly on paragnath patterns, particularly in Areas III (Fig. 2D) and VII and VIII (Fig. 2G), and on the similar length of the falciger blades. The paragnath features in Areas VII and VIII lend support to the taxonomic importance of highlighting faint ridges and furrows in the ventral oral ring for certain Perinereis species (Conde-Vela 2018), which usually are not accounted for in species descriptions because no apparent pattern is found (i.e., Teixeira et al. 2022a). Perinereis kaustiana sp. nov. and P. helleri are also phylogenetically closely related (Fig. 1A), despite being divergent lineages, with genetic distances in the range used for delimiting polychaete species (i.e., Kvist 2016; Lobo et al. 2016; Nygren et al. 2018). (* No chaetae data available for P. striolata.) This situation, together with the absence of, or previously overlooked subtle, morphological differences, resembles cryptic lineages within a species complex (Teixeira et al. 2022b, 2023), and further sampling efforts between the Red Sea and the Indo-Pacific region are needed to assess this. The new species is so far unique to the northern Red Sea and apparently easy to find on the rocky beaches of the Gulf of Aqaba. Considering the high rate of endemism in the Red Sea (DiBattista et al. 2016), this species may indeed be endemic to this sea, although further sampling across this region and the Indo-Pacific area might prove it to be more widespread. At the remaining sampling sites further south, along the northern Saudi coast, P. kaustiana sp. nov. is outcompeted by the sympatrically distributed Perinereis nuntia species group, which seems to be the dominant coastal annelid in the region (Fig. 1B). The latter is also a species complex, with several different species recently revised by Villalobos-Guerrero (2019). Our specimens initially identified as belonging to the P. nuntia complex revealed at least two different morphotypes, which after further morphological (mainly based on paragnath patterns, Fig. 1C, D) and molecular review corresponded to the new species recently described by Elgetany et al.
(2022) from the neighbouring Egyptian coast (Suez Canal), namely P. damietta (Fig. 1C) and P. suezensis (Fig. 1D). These species are sympatric with P. kaustiana sp. nov., but apparently not sympatric with each other in the studied region (Fig. 1B). Perinereis damietta (which is morphologically more similar to P. heterodonta Gravier, 1899 than to P. nuntia, according to Elgetany et al. (2022)) was found mainly in lagoon-like environments, whereas P. suezensis was found only in fully marine areas. Perinereis kaustiana sp. nov. shared both marine and lagoon-like habitats, with all three sampled species found in intertidal coarse-grained sand, under rocks or cobbles. As speculated by Elgetany et al. (2022), P. damietta seems to have a slightly wider habitat preference, since some of our specimens (from Al Muwaileh lagoon) also occurred subtidally, attached to small rocks at approximately 1 m depth. Figure 1. Phylogenetic tree and MOTU distribution for the three sampled Red Sea Perinereis species. A Maximum likelihood phylogeny based on COI sequences, with information regarding the different MOTU delineation methods. Numbered MOTUs (1-4) contain original sequences from Perinereis specimens analysed in this study; "GB" MOTUs are based on Perinereis sequences mined from GenBank; MOTU "OUTG" corresponds to the rooted outgroup, Alitta virens. Bootstrap values lower than 80% are not displayed. B Red Sea MOTU distribution; each coloured pie corresponds to a unique species and its respective abundance proportion; larger pie charts indicate a higher number of sympatric species. Species from the Suez Canal are based on mined GenBank sequences from Elgetany et al. (2022); abundance proportions are based on type material. C Perinereis damietta, focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86031. D Perinereis suezensis, focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86032. E Perinereis kaustiana sp. nov., focus on prostomium and pharynx, dorsal view, specimen NTNU-VM-86011. Scale bars: 500 μm (C-E). Table 1. Species, number of sequences (n), geographic location, and the respective GenBank COI accession numbers for the original material and sequence data used from other studies. Table 2. Primers and PCR conditions used in this study.
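As a conceptual companion to the delimitation workflow above, the sketch below clusters a pairwise K2P distance matrix into MOTUs by threshold-based single linkage. It mimics the spirit of RESL/ASAP-style partitioning rather than their actual algorithms, and the matrix and the 2% threshold are invented for illustration.

```python
import numpy as np

# Conceptual single-linkage MOTU clustering from a pairwise distance matrix.
# This mimics the spirit of BIN/ASAP-style partitioning, not their actual
# algorithms; the distances and the 2% threshold are invented placeholders.
def motus(dist: np.ndarray, threshold: float) -> list[set[int]]:
    n = dist.shape[0]
    parent = list(range(n))

    def find(i: int) -> int:          # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= threshold:
                parent[find(i)] = find(j)     # merge the two clusters

    clusters: dict[int, set[int]] = {}
    for i in range(n):
        clusters.setdefault(find(i), set()).add(i)
    return list(clusters.values())

d = np.array([[0.000, 0.010, 0.180, 0.190],
              [0.010, 0.000, 0.185, 0.195],
              [0.180, 0.185, 0.000, 0.015],
              [0.190, 0.195, 0.015, 0.000]])   # toy K2P distances
print(motus(d, threshold=0.02))               # -> [{0, 1}, {2, 3}]
```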
Effective Label-Free Sorting of Multipotent Mesenchymal Stem Cells from Clinical Bone Marrow Samples Mesenchymal stem cells (MSCs) make up less than 1% of the bone marrow (BM). Several methods are used for their isolation, such as gradient separation or centrifugation, but these methodologies are not direct and, thus, plastic-adherence outgrowth or magnetic/fluorescence-activated sorting is required. To overcome this limitation, we investigated the use of a new separative technology to isolate MSCs from BM; it separates cells label-free, based solely on their physical characteristics, preserving their native physical properties, and allows real-time visualization of the cells. BM obtained from patients operated on for osteochondral defects was concentrated directly in the operating room and then analyzed using the new technology. Based on cell live-imaging and the sample profile, it was possible to highlight three fractions (F1, F2, F3), and the collected cells were evaluated in terms of their morphology, phenotype, CFU-F, and differentiation potential. Multipotent MSCs were found in F1, which showed higher CFU-F activity and differentiation potential towards mesenchymal lineages compared to the other fractions. In addition, the technology depletes dead cells, removing unwanted red blood cells and non-progenitor stromal cells from the biological sample. This new technology provides an effective method to separate MSCs from fresh BM, maintaining their native characteristics and avoiding cell manipulation. This allows selective cell identification, with a potential impact on regenerative medicine approaches in the orthopedic field and on clinical applications. Introduction Bone marrow (BM) is one of the most studied sources of MSCs, whose therapeutic potential has been explored in several diseases, proving its efficacy in clinical trials, from heart failure to graft-versus-host disease [1] and other pathologies. In the orthopedic field, the use of bone marrow concentrate (BMC) in substitution of, or in addition to, the procedure of marrow stimulation provides interesting results, because it contains not only hematopoietic stem cells (HSCs) and mesenchymal stem cells (MSCs) as a source for regenerating tissues, but also accessory cells that support angiogenesis and vasculogenesis by producing several growth factors [2]. It is important to notice that the BM clot as a 3D environment supports the
Sample Harvesting and Preparation BM was obtained from the iliac crest of 8 patients (mean age 35 [range 15-66 y/o]; 3 females and 5 males) who underwent autologous cell transplantation for the treatment of osteochondral defects. The inclusion and exclusion criteria for the patients operated on with this treatment were the following. Inclusion criteria: osteochondral lesions grade III or IV of the International Cartilage Regeneration & Joint Preservation Society (ICRS) and lesion dimension > 1.5 cm². Exclusion criteria: osteoarthritis, malalignment or ankle instability, infectious disease, or the presence of haematological or rheumatological diseases and coagulation disorders. The Ethical Committee of the Institution approved the human protocol for this study (number: 004350). All investigations were conducted in conformity with the ethical principles of research, and written informed consent was signed by all the patients enrolled in the study. BM was harvested following standard procedure [13] (Supplementary Material) and concentrated using the kit IOR-G1 (Novagenit, Mezzolombardo, Italy) to reduce the volume from 60 to 6 mL directly in the operating room, removing most of the red blood cells (RBCs) and plasma. Mononuclear cells were counted using crystal violet solution (Sigma Aldrich, St. Louis, MO, USA) to exclude red blood cells, and 3 × 10⁶ cells of the BMC left unused from the clinical treatment were processed using the gold-standard procedure by direct plating on a tissue culture dish. Cells were plated at a density of 20,000 cells/cm² and cultured in expansion medium made of minimum essential medium Eagle alpha modification (α-MEM, Gibco, Rockville, MD, USA) supplemented with 15% foetal bovine serum (FBS, Euroclone S.p.A., Milan, Italy) and 1% penicillin-streptomycin (Gibco). Cells were grown in a 5% CO₂ humidified chamber. Approximately 8 × 10⁶ cells of the BMC were employed for the separation using Celector®. Two additional samples were used for the method development, to investigate the best sample preparation for the MSC-isolation protocol by Celector®. These BM samples were equally divided into two parts: one part was processed using the density gradient technique Ficoll® following the manufacturer's instructions (Ficoll® Paque, Sigma Aldrich, Darmstadt, Germany), and the second one was concentrated using the kit IOR-G1. Celector® Instrument The Celector® instrument consists of a fluidic system and a biocompatible capillary separation device implementing a patented technology (IT1371772, US8263359, and CA2649234). The separation device is made of inert and biocompatible plastic material, 40 cm in length, 4 cm in width, and 250 µm in thickness, connected to the fluidic system. A micro-camera detector placed at the exit of the separation channel (USB 2.0 board-level camera mvBlueFOX-MLC, Matrix Vision, Oppenweiler, Germany) monitors the elution process, generating a recorded plot of the eluted cell number as a function of time (fractogram) and acquiring frames. A fraction collector is connected to the separation device. The instrument was placed inside a laminar flow cabinet to provide sterile working conditions. A schematic view of the instrumentation setup and fractionation procedure is reported in Figure 1.
Fractionation Principle and Procedure The separation is obtained in a rectangular-shaped capillary device, 4 cm wide and 250 µm high, where cell suspensions are eluted through a laminar flow of mobile phase (Figure 1). During transport, injected cells reach a specific position across the channel thickness due to the combined action of gravity, acting perpendicularly to the flow, and the opposing lift forces, which depend on the morphological features of the sample. Cells at a specific position in the channel acquire well-defined velocities and, consequently, elute at specific times. The cell suspension is injected into the system at a flow rate of 1 mL/min; subsequently, the flow is interrupted to allow sample relaxation, a process necessary to let the analytes reach an equilibrium position along the channel thickness in response to the external field. The relaxation time depends on the cell type, and it is usually a few minutes long for mammalian cells. Finally, sample elution is carried out by reactivating the flow of the mobile phase [14,15]. The fractionation procedure involved, first, the decontamination of the fractionation system by flushing with cleaning solution at a 1 mL/min flow rate. Next, the system was washed copiously with sterile, demineralized water at the same flow rate. Cells, MSCs in particular, can adhere to plastic: to block non-specific interaction sites on the plastic walls, the fractionation system was flushed at 0.5 mL/min with a sterile coating solution. Finally, it was filled with sterile mobile phase. All solutions were provided by Stem Sel Ltd. For the study, mononuclear cells from BMC were diluted to a final concentration of 8 × 10⁶ cells per mL, and 100 µL was injected. Cells were automatically re-dispersed 3 times to homogenize the suspension and eluted at a flow rate of 2 mL/min with a relaxation time of 3 min. Optical Analysis Eluted cells were monitored using the micro-camera detector, and the counting software (Stem Sel Ltd., Bologna, Italy) generated the fractogram.
Dimension inclusion/exclusion criteria were set by the operator to refine the counting procedure. The dimension range was set from 7 µm upward, with an average of 14 µm, to distinguish single cells from cell aggregates. The software was therefore able to recognize sizes ranging from small cells to large cell aggregates. Three captured images of eluting cells from each fraction were post-processed to measure the cell area using the free imaging software Fiji (ImageJ v. 1.44p, NIH) with the 'Analyze Particles' feature. Analysis and Cell Collection For every sample, cells were first analyzed to obtain a patient-specific fractogram and identify the fractions to collect. Consecutive analyses were run to increase the number of collected cells per fraction. The fractionated cells, collected in 50 mL tubes, were centrifuged and, subsequently, the cell pellets were pooled together; the cell number was counted using crystal violet solution (Sigma, St. Louis, MO, USA) to identify exclusively mononuclear cells and exclude RBCs. The percentage of enrichment was calculated by dividing the number of cells recovered from each fraction by the total number of injected cells. Downstream analyses (CFU-F assay, differentiation assay, and visualization of physical parameters by flow cytometry) were performed with freshly collected cells; to obtain a sufficient cell number for MSC phenotyping by flow cytometry, cells needed to be expanded in vitro. Cell recovery for each fraction was compared to the integral of the output fractogram, i.e., the area under the curve. The integral was multiplied by 8 to obtain an approximation of the total number of cells, because the camera framed one-eighth of the fluidic device's area. Every run of all individuals was analyzed, summed, and compared to the cell recovery. Physical Characteristics Twenty thousand cells from fresh BMC were analyzed by flow cytometry to detect physical parameters and visualize the cell cloud on the FSC/SSC plot. Colony-Forming Unit-Fibroblast Assay (CFU-F) The clonogenic ability of the different fractions was determined by a low-density CFU-F assay. A total of 9500 mononuclear cells/cm² collected from each fraction, as well as cells from BMC (used as internal control, CTRL), were seeded in 9.5 cm² dishes in expansion medium: minimum essential medium Eagle alpha modification (α-MEM, Gibco BRL, Rockville, MD, USA) supplemented with 15% foetal bovine serum (FBS, Euroclone S.p.A., Milan, Italy) and 1% penicillin-streptomycin (Gibco). The medium was changed twice a week. At 10 and 20 days (14 days for the preliminary test), cells were fixed in methanol and stained with Crystal Violet (Sigma Aldrich). An aggregate containing more than 50 cells was considered a colony originating from one cell. The number of colonies was counted using an inverted light microscope. Phenotype Characterization Cells from each fraction and CTRL were expanded for one passage and then analysed for the expression of mesenchymal (CD73, CD90, CD105) and haematopoietic (CD14, CD20, CD34, CD45) markers using the human MSC Phenotyping Kit (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany). This kit was developed for the standardized identification and phenotyping of cultured human MSCs by flow cytometry based on the defined ISCT standards. Tubes were read with a FACSCanto (BD Biosciences, San Jose, CA, USA). The results were analysed with FlowJo software (FlowJo v. 1.44p, LLC).
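To make the two bookkeeping steps just described concrete (percentage enrichment per fraction and the camera-based estimate of total eluted cells), a minimal sketch follows; all numbers are invented placeholders, and the factor of 8 simply encodes the statement that the camera frames one-eighth of the device's area.

```python
import numpy as np

# Minimal sketch of the two bookkeeping steps described above; all numbers
# are invented placeholders, not data from this study.
injected = 8.0e5                      # total mononuclear cells injected

recovered = {"F1": 4.0e4, "F2": 2.6e5, "F3": 9.0e4}   # cells per fraction
for name, cells in recovered.items():
    print(f"{name}: enrichment = {100 * cells / injected:.1f}%")

# Camera-based estimate of total eluted cells: integrate the fractogram
# (cell counts vs. time) and multiply by 8, since the camera frames
# one-eighth of the fluidic device's area.
t = np.linspace(0, 14, 141)                        # elution time, minutes
counts = 900 * np.exp(-((t - 3) / 0.8) ** 2) \
       + 400 * np.exp(-((t - 7) / 1.5) ** 2)       # toy two-peak profile
total_estimate = 8 * np.trapz(counts, t)
print(f"estimated eluted cells: {total_estimate:.0f}")
```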
Differentiation Capacity
A total of 150,000 mononuclear cells from each fraction and CTRL were seeded onto a 12-well plate in expansion medium. The medium, procedure, and quantification are explained in the Supplementary Material. Briefly, for chondrogenic differentiation, the medium was replaced after 24 h with chondrogenic medium, while for osteogenic differentiation, cells were immediately cultured in differentiation medium. Media were changed twice a week and cells were evaluated at 21 and 28 days. To assess differentiation, Alcian Blue and Alizarin Red staining were performed for chondrogenic and osteogenic differentiation, respectively.

Fractionation Method Development
Method development was carried out in order to achieve an optimal cell recovery and enrichment of the putative MSCs in one fraction; in particular, the flow rate and relaxation parameters were studied. A suspension of 6 × 10⁶ cells per mL was prepared, and 100 µL were injected per analysis at flow rates of 1 and 2 mL/min. Cells eluted with a positively skewed profile (Figure 2A), and doubling the flow rate did not change the analysis resolution. Moreover, no difference was identified between samples obtained by the Ficoll® or IOR-G1 concentration step (data not shown). Conversely, adding a relaxation time of 3 min changed the profile (Figure 2B). Two main peaks were observed, and a difference in intensity was evident between the two preparations: higher for the concentration method compared to the Ficoll® preparation. Cells were recovered from the two main fractions, and the presence of RBCs was identified mainly in the first one, which explained the lower intensity for the Ficoll® preparation because of the MNC enrichment and consequent RBC depletion of this procedure. The opposite, few RBCs and many cells, was observed in the second fraction, with a lower intensity for the Ficoll® preparation probably caused by cell loss from the several washing-centrifugation steps. Therefore, we selected the concentration method because of its higher second peak and to avoid extra cell manipulation steps. An additional clinical sample processed with the IOR-G1 system was analysed, performing multiple runs to obtain a higher number of collected cells (Figure 2C). The sample was divided into 3 fractions (F1, F2, and F3), and cells were separately collected in sterile tubes for morphological and CFU-F analysis. From the overlapping profiles obtained from different injections, the system proved to maintain reproducibility among runs. When cells were plated, only F1 cells attached to the plastic surface and showed the same fibroblastic morphology as the gold-standard culture. Freshly isolated cells were also plated to assess clonogenicity, one of the gold-standard assays to define MSCs; as for the morphological aspect, only F1 cells showed CFU-F ability (Figure 2E). Therefore, this protocol was further used in our study to isolate MSCs directly from BMC.
Figure 2 (caption, partial): The two preparations were then tested adding a relaxation time of 3 min and run at a flow rate of 2 mL/min. This protocol gave a different profile, showing two main populations given by the two peaks. Bone marrow concentrate (CTRL) showed higher intensity in the first peak, which contained mainly RBCs, and also in the second one. Due to the higher peak intensity, not to mention the avoidance of extra cell manipulation, the IOR concentration protocol was used for cell preparation. (C) One clinical sample was analysed to collect cells over different time intervals (F1: 2-5 min; F2: 5-7 min; F3: 7-9 min), and the repetition of 4 analyses (RUN) showed reproducibility. Once cells were plated, only F1 cells attached to the plate, showed the same morphology as CTRL (D), and showed the ability to form CFU-F (E).

Celector® Bone Marrow Concentrate Profile
Eight samples of BMC were harvested and proved viable on cell counting. Cells were injected into Celector®, and the typical fractogram was obtained (Figure 3B). Profile comparison across all individuals confirmed reproducibility (Supplementary Figure S1). During fractionation analysis with Celector®, unretained cells, likely dead cells and debris, were eluted after 60 s from the start of the analysis (void). A close look into the live images of eluting cells revealed the presence of cells, many in aggregates, which eluted before the main peak of RBCs (Figure 3A). The presence of these cells was also captured by the counting system, showing a small hump just before the first peak (Figure 3B, arrow). Sorting was arranged to recover these cells and separate them from the majority of RBCs. Eluted cells were divided into three fractions based on eluting time: fraction 1 (F1) approximately from 1 to 5 min, fraction 2 (F2) from 5 to 9 min, and fraction 3 (F3) from 9 to 14 min.
For each sample, 10 consecutive analyses were performed: the first was run to define the individual's fractogram and the sample collection intervals, and the following 9 identical runs to obtain a higher cell number per fraction. BM contains a variety of cell types and sizes. In the majority of cases, F1 contained many cell aggregates, and when single cells were analysed using ImageJ software, the average diameter was around 9 µm. F2 cells had a smaller diameter of 8 µm, similar to RBCs, and F3 cells appeared to have sharper contours and a diameter of 9.7 µm (Figure 3C). Moreover, this result showed how cell density is also an important characteristic for cell separation because, based on the separation principle, smaller cells elute later in the analysis; therefore, the cell density of F1 could be an important factor among the ultrastructural characteristics that influenced the sorting process [16]. Cells eluting in F1 are indeed smaller; however, because they are mostly in an aggregate form, they reach a higher position across the channel thickness, acquiring a higher velocity and, consequently, an early exit. Right after sorting, cell recovery for each individual was similar, increasing from F1 to F3 (Figure 3D), with a percentage of enrichment of 10% for F1, 20% for F2, and 45% for F3, and an overall cell recovery of 75%. Interestingly, the higher number of recovered cells in F3 did not correspond to a higher ability to adhere to plastic surfaces, one of the defining properties of MSCs. Few cells from F2 and F3 attached to the culture plate, and these replicated very slowly. F1 cells adhered and proliferated in every sample, confirming the preliminary results. The new sorting intervals were more efficient for obtaining only adherent cells and excluding all unwanted RBCs. In order to explore the predictivity of the fractogram, cell numbers for each fraction were extrapolated from the fractogram area for each individual (Figure 3E). The number of F1 cells was similar between the two systems, cell counting and software analysis, while F2 cell numbers were much higher in the software version because the majority of F2 cells are RBCs, which are counted by the software but excluded by the crystal violet staining. The software counted more cells in F3 compared to F1, which is in accordance with the cell recovery trend observed, but the F3 value was underestimated compared to the cell recovery number. This could be explained by an instrumental limitation in this first version; F3 cells eluted in small groups, and the imaging software could have counted groups of closely eluting cells as single cells.

Representation of Cells' Physical Parameters
Fresh BMC cells were analysed by flow cytometry, one of the gold-standard techniques for characterizing cell size and complexity, in order to compare it with the Celector® data output. BMC cells were plotted using the FSC/SSC parameters, and we observed a heterogeneous population of cells that was very broad in dimension (FSC parameter) and intracellular complexity (SSC parameter), with no clear distinction of different populations (Figure 3F). It is impossible by flow cytometry, using only physical parameters, to distinguish the multipotent MSCs.
Figure 3 (caption, partial): Cells were collected in three fractions based on eluting time (F1: 1-5 min; F2: 5-9 min; F3: 9-14 min). (C) Cell diameter measurement of eluting cells using ImageJ software and the "analyze particles" plugin. Three screenshots per fraction were used, and diameter is expressed in µm. Cell aggregates were excluded to obtain only single-cell dimensions; F1 cells had a diameter of 9 µm, F2 cells, mostly RBCs, had a diameter of 8 µm, and F3 cells had a diameter of 9.7 µm. (D) Graph representation of the average cell recovery for each fraction for the 8 samples analyzed and the percentage of cell enrichment of each fraction compared to the total number of cells injected into the system. Recovery and enrichment rose from F1 to F3. (E) Quality-control feature of the Celector® counting software to identify cell recovery for a specific fraction. The cell number of each fraction was measured by the integral of the area under the profile (cells vs. time) (F1-2-3 sw) and compared to the cell number from the crystal violet count (F1-2-3 cr). Every run of each individual was calculated and then the average was graphed. The cell count for F1 was very similar between the two systems, while the F2 cell count by the software (F2 sw) was higher due to the presence of RBCs, which are not manually counted by crystal violet staining. The F3 sw count gave a lower number because of a technical limitation: closely eluting cells were miscounted as single cells instead of a group. (F) Flow cytometry data of fresh BMC show great heterogeneity among cells. No distinct populations were observed if only physical parameters were used. All data are represented as mean ± SD, multi-parametric one-way ANOVA test: ** p < 0.01, *** p < 0.001, **** p < 0.0001.

Morphological and Phenotypical Analysis
Collected cells from each fraction and CTRL were expanded in culture to assess their morphology and proliferation. Morphological differences were noted among cells from different fractions. F1 cells resembled CTRL cells (Figure 4A(i)), with a typical fibroblastic-like shape and growth in colonies (Figure 4A(ii)). F2 cells still expressed a fibroblastic-like shape and an arrangement of spiral-shaped growth (Figure 4A(iii)), while F3 cells looked different, with a wider cytoplasm, irregular contours and long pedicles extruding from the cell body (Figure 4A(iv)). One of the main observations was the number of adherent cells per fraction: F1 contained more adherent cells and grew faster in culture, so it probably contained the most proliferative clones; few cells from F2 attached to the plate and grew slowly, whereas only a couple of cells from F3 adhered to plastic and had a slow replicative pace. F1 cells reached confluence in 20 days, like CTRL, while F2 and F3 cells reached less than 50% confluence in approximately 40 days.
Expanded cells were detached and stained for mesenchymal and hematopoietic markers for flow cytometry analysis following ISCT standards (Figure 4B). Cells from each fraction expressed the same percentage of the mesenchymal markers (CD90, CD105, CD73: 100%), while the combination of the hematopoietic markers increased from F1 to F3 (Figure 4B), suggesting the presence of adherent cells of hematopoietic origin [17]. These results confirmed that the canonical surface protein expression of MSCs cannot be the only criterion to define MSCs.

Clonogenic Activity
Cells from each fraction and CTRL were tested for their clonogenic activity based on the CFU-F assay, one of the defining assays for MSCs. F1 cells demonstrated clonogenic ability, and we noted a clear trend of higher clonogenic potential in F1 compared to CTRL already after 10 days, confirmed after 20. The colony-forming capacities in F2 and F3 were very different from F1. CFU-F counts were very low in F2 samples, and almost no colonies were present in F3 (Figure 4C). The few CFU-F colonies found in F2 were likely derived from the tail of the F1 peak.

Differentiation Capacity
The third requisite to define MSCs according to the ISCT is their ability to differentiate towards the mesenchymal lineages. We focused on chondrogenic and osteogenic differentiation ability because of the clinical perspective of this study and the use of BM-MSCs for osteochondral regeneration. We proved that only F1 cells have a differentiation potential towards the osteogenic and chondrogenic lineages compared to the other two fractions. F2 cells poorly differentiated towards the two lineages, and F3 cells did not show any differentiation ability (Figure 5A,B). We observed that the stained area of F1 cells was more homogeneous among individuals, with a symmetrical distribution of the stained area compared to the CTRL (skewness: chondrogenic F1 vs. CTRL: 0.7 vs. 1.5; osteogenic F1 vs. CTRL: 0.72 vs. 1.66).

Figure 4 (caption, partial): Cells from CTRL, F1, F2, and F3 fractions after 10 days of culture. Cells from CTRL and F1 showed the typical fibroblastic-like shape, and colonies with epithelial morphology were observed in F2 cell culture, with colonies growing in a spiral form and cells showing a wide cytoplasm and long extensions. Cells from F3 were very few, did not form colonies, and had a wide cytoplasm (scale bar: 100 µm). (B) Flow cytometry analysis showed a high and homogeneous expression of the mesenchymal markers CD90, CD73, and CD105 in cells from all fractions but an increase in the hematopoietic markers CD45, CD34, CD14, and CD20 in cells from F2 and F3. (C) Cells collected from each fraction and control were plated at low density to test the capacity to form CFU-F. The assay was performed at 10 and 20 days. Colonies were stained with Crystal Violet, and the quantification of the number of colonies in each sample was graphed (D). After 10 days, F1 cells displayed a higher capacity to form CFU-F compared to CTRL (8.9 vs. 4.4 CFU-F, p = 0.0708), and this was maintained after 20 days. Very few colonies were observed in F2 and none in F3 (mean ± SD, multi-parametric one-way ANOVA test: * p < 0.05, ** p < 0.01, *** p < 0.001).
Figure 5 (caption, partial): Semi-quantitative analysis was performed by measuring the stained area with ImageJ software. Results were calculated for osteogenic and chondrogenic differentiation after 21 days (C) and 28 days (D) in culture (mean ± SD, multi-parametric one-way ANOVA test: * p < 0.05; ** p < 0.01, *** p < 0.001, **** p < 0.0001).

Discussion
The use of MSCs in clinical trials is increasing, but the methods used to isolate and produce cells among centres are largely heterogeneous [1]. Donor-to-donor variability [18] and different isolation/culturing protocols of primary BM-MSCs often result in heterogeneity of clinical outcomes [19]. MSCs are defined by their surface marker expression and their ability to form colonies and differentiate towards mesenchymal lineages. The International Society of Cell Therapy (ISCT) proposed CD90, CD105, and CD73 as the essential markers to define MSCs, together with a lack of expression of the CD45, CD34, CD14 or CD11b, CD79α or CD19, and HLA-DR surface molecules [6]. A pre-enrichment step using magnetic beads is necessary before multi-labelled sorting [20], even though these markers are not specific and can also be expressed by endothelial cells and fibroblasts [21,22]. Cells isolated by plastic adherence and then characterized for several mesenchymal markers and differentiation potential in vitro/in vivo cannot discriminate the best BM-MSC population, since even single-strain populations differ in phenotype and multipotency [19]. To overcome these shortcomings, we established a new method to isolate MSCs from freshly harvested bone marrow concentrate (BMC), which maximizes the stem cell presence to a greater extent. The new isolation method is characterized by a label-free technology called Celector®. Celector® can be considered a "cell chromatograph": cell populations are sorted according to their intrinsic physical properties (dimension, morphology, density). Different cell populations are eluted at different times and collected, obtaining a population that is homogeneous in its physical characteristics.
We first optimized the isolation protocol, acting on different flow rates and stop-flow features, using two cell preparations: density gradient and BMC. The use of BMC allowed a higher cell recovery, as shown by the peak intensity compared to the density gradient step; it avoids multiple washing/centrifugation steps that could cause cell loss, and, clinically, its use is increasing for osteo-cartilage regeneration and for the treatment of several orthopaedic pathologies [23]. The instrument was able to discriminate three main populations within BMC, and only one, namely F1, demonstrated all the stemness characteristics defined by the ISCT. F1 cells were the only ones able to adhere to plastic, form CFU-F, and differentiate towards mesenchymal lineages. The microfluidic system demonstrated an optimal sorting resolution with 75% cell recovery while preserving viability and the proliferative and multipotent characteristics of the cells. The system was run continuously to obtain a high cell number, and this did not affect the sorting performance. The preservation of stem cell integrity and potential is attributed to the low mechanical and shear stress during the separation process. Moreover, this innovative approach visualizes and sorts cell aggregates, which are usually an obstacle for sorting technologies, increasing the isolation of MSCs. These cell aggregates in F1 could be hematons, cell aggregates isolated from BM composed of hematopoietic cells and MSCs [24][25][26]. The MSC enrichment in F1 was 9% of the total BMC, a very interesting datum since it is known that MSCs constitute around 0.1% of bone marrow. This higher percentage could be explained by a higher efficiency in sorting proliferative and lively cells from the niche, in single or aggregate state. F1 cells showed a higher trend in their clonogenic potential compared to the standard culture. We hypothesized that the exclusion of the other 65% of recovered cells, in F2 and F3, could positively affect this result. F1 cells also showed a more homogeneous differentiation ability towards the osteogenic and chondrogenic lineages among individuals compared to CTRL. Although only a few cells from F2 and F3 attached to plastic culture dishes and had a low proliferative rate, they constituted the majority of recovered cells after the isolation procedure, 20% and 45%, respectively. These cells had a heterogeneous morphology, with an arrangement of spiral-shaped growth cells and cells with wider cytoplasm [19]. Previous works from the Prockop group also showed differences in morphology among BM-MSCs [27][28][29]: smaller cells were isolated by physical parameters with flow cytometry but did not express a different percentage of MSC markers compared to the whole sample, confirming that no surface epitopes are able to distinguish subpopulations in different preparations of MSCs. We confirmed this observation: adherent cells from F2 and F3 expressed mesenchymal markers equally compared to F1 and CTRL. However, F2 and F3 cells showed a higher percentage of hematopoietic markers, 9% and 15%, and it is known that the hematopoietic fraction is around 20-30% in human BM-MSC cultures [17], which is exactly the sum of the expression of F2 and F3 cells. These fractions are therefore likely to contain the majority of this sub-population. Celector® shows an added value in the isolation of BM-MSCs: it removes the red blood cells (RBCs) from the BM. The majority of RBCs were collected in F2, so the BM-MSC culture was depleted of these unwanted cells.
RBCs can obstruct cell adhesion, and it has been shown that they can affect the functionality of isolated BMCs and impair organ recovery in patients with acute myocardial infarction [30]. One of the data outputs of Celector® is the sample fractogram with live images, a fingerprint analysis of the cell composition that discriminates differences in cell populations and gives immediate feedback on the presence of MSCs. The micro-camera detects, in live mode, the variety of cell populations, discriminating between single cells or aggregates and unwanted unretained material. Live images and post-processing analysis are interesting features that still need to be upgraded, but they already offer extra information on cell morphology. In addition, the cell number extrapolated from the fractogram for F1 was comparable to its cell recovery. Therefore, the presence of the hump in the first part of the profile is a sign of the presence of MSCs in that specific patient and could be used as a predictive readout for cell usage. In conclusion, we tested and proved the efficacy of a new technique for the isolation and enrichment of BM-MSCs from raw samples. The process is reproducible among patients, with a characteristic profile. Cells derived from all three fractions highly expressed mesenchymal markers, but the technology could discriminate the fraction containing the actual multipotent MSCs based only on their native physical properties. Moreover, the system allowed purifying raw samples, with a depletion of differentiated hematopoietic cells and RBCs. Further studies need to be performed in order to obtain insights into the stemness and paracrine potential of the selected cell population. These are very promising results, which open interesting perspectives for the use of this technology to improve stem cell isolation and quality control from raw clinical samples. The technology could be upgraded to process a higher number of cells, in order to obtain sufficient cell amounts for preclinical applications within fewer cycles, and to improve cell recognition and cell counting for use when there is a need to compare the quality of MSCs affected by differences in harvesting techniques, isolation, culture conditions, and different harvesting sites, all of which result in different MSC yields [31,32].

Patents
Celector® is based on a technology patented in Italy (No. IT1371772, "Method and Device to separate totipotent stem cells"), in the USA, and in Canada (US No. 8,263,359 and CA2649234, "Method and device to separate stem cells"). Stem Sel® also holds an Italian patent (IT1426514, "Device for the Fractionation of Objects and Fractionation Method", allowed 2016).
Fixed-point Quantization of Convolutional Neural Networks for Quantized Inference on Embedded Platforms

Convolutional Neural Networks (CNNs) have proven to be a powerful state-of-the-art method for image classification tasks. One drawback, however, is the high computational complexity and high memory consumption of CNNs, which makes them infeasible to execute on embedded platforms that are constrained in the physical resources needed to support them. Quantization has often been used to efficiently optimize CNNs for memory and computational complexity at the cost of a loss in prediction accuracy. We therefore propose a method to optimally quantize the weights, biases and activations of each layer of a pre-trained CNN while controlling the loss in inference accuracy, in order to enable quantized inference. We quantize the 32-bit floating-point precision parameters to low-bitwidth fixed-point representations, thereby finding optimal bitwidths and fractional offsets for the parameters of each layer of a given CNN. We quantize the parameters of a CNN post-training, without re-training it. Our method is designed to quantize the parameters of a CNN while taking into account how the other parameters are quantized, because ignoring quantization errors due to other quantized parameters leads to a low-precision CNN with accuracy losses of up to 50%, far beyond what is acceptable. Our final method therefore gives a low-precision CNN with accuracy losses of less than 1%. Compared to a method used by commercial tools that quantizes all parameters to 8 bits, our approach provides quantized CNNs with, on average, 53% lower memory consumption and a 77.5% lower cost of executing multiplications for the two CNNs trained on the four datasets that we tested our work on. We find that layer-wise quantization of parameters significantly helps in this process.

I. INTRODUCTION
Recent developments in Deep Learning have drawn significant attention from research and industry, especially considering the ability of neural networks to efficiently deal with tasks such as image recognition and classification, object detection, speech recognition, word prediction etc. with high accuracy, even surpassing human capabilities. However, one shortcoming of these networks, as noted by many of the works cited in this paper, is their cost with respect to computational complexity and memory. While hardware such as GPUs and CPUs efficiently supports these requirements for neural networks, these costs can be a problem when implementing these networks in practice for an embedded application. As compared to general-purpose CPUs and GPUs, embedded systems are designed for specific applications and are restricted with respect to physical resources. They are quite small in size, have low on-chip memory and fewer arithmetic and logic units (ALUs), and therefore also consume less energy. They also lack the level of parallelism found in CPUs and GPUs. Given the lack of ALUs, they often only support simple operations on low-precision numbers. Many small embedded systems do not have ALUs for operations on the commonly supported floating-point numbers and therefore require software subroutines to manipulate floating-point numbers, which is a more time-consuming process. In such a case, a direct implementation of computationally expensive and memory-hungry algorithms such as neural networks is infeasible without further optimizations.
A solution to this problem is therefore to either design custom hardware to accelerate neural networks or to optimize neural networks for existing embedded hardware such as Field Programmable Gate Arrays (FPGAs) and micro-controllers, or a combination of the two approaches. We choose the latter approach by reducing numerical precision from the commonly used 32-bit floating-point precision to integer/fixed-point precision. This process of reducing the precision of numbers, done by mapping numbers from a larger set to a smaller and more discrete set, is known as quantization. Since integer operations are simpler than floating-point operations, quantization simplifies computational complexity and increases computational speed. Quantizing to integer precision also removes the need for floating-point hardware, which in turn reduces energy consumption. We can additionally reduce the number of bits used to represent these numbers as much as possible, which reduces the memory requirements of the model. In signal processing, the process of quantization leads to quantization errors: the differences between the original and the quantized values. In the context of neural networks, we can expect the quantization errors of the quantized parameters to have a direct impact on the prediction accuracy of the network. We must therefore reduce the numerical precision of the network as much as possible while minimizing the impact on prediction accuracy. In this paper we propose a method that can optimally quantize the parameters, namely the weights, biases and activations of each convolutional and dense layer, in a pre-trained Convolutional Neural Network (CNN) in order to make it compatible with, and enable efficient quantized inference on, embedded platforms. We quantize the parameters by reducing their precision from the floating-point to the fixed-point number representation. Fixed-point numbers are treated like integers in hardware, therefore allowing simple and fast operations, but include a scaling offset that allows for limited fractional precision. Our proposed method takes a CNN model, analyzes it, and returns the optimal fixed-point number representations for the parameters of each layer in the network while ensuring an acceptable loss in inference accuracy. The resulting fixed-point representations can then be used to implement a low-precision CNN in a lower-level language like C for deployment on an embedded platform. As opposed to compressing a given neural network for a specific embedded platform, our method is aimed at quantizing the parameters of a given CNN for deployment on any embedded platform of choice. Our work is limited to the design of this method and is therefore not concerned with the implementation of the resulting quantized model on embedded platforms. While multiple works in the literature train and re-train the CNN during the quantization process, our method relies on post-training quantization, where we quantize the parameters of a pre-trained CNN without re-training the network. Additionally, even though all parameters of the CNN could be uniformly quantized to the same level of precision, we vary the precision per layer, as we find variance in the level of precision that the parameters of certain layers require. As our results will show, this leads to our method giving quantized models with much lower memory consumption while ensuring an acceptable accuracy loss for the low-precision network.
Although most CPUs and GPUs only support bitwidths of 4, 8, 16, 32 and 64, we also consider arbitrary bitwidths that are not limited to these. We investigate this in order to understand whether there is an incentive for developing dedicated hardware for these non-conventional bitwidths in the future. We also limit our scope to CNNs for image classification problems. We first present and discuss commonly referenced works in the literature in Section II that have approached neural network quantization with state-of-the-art results. In Section III we then present some preliminaries and precisely formulate the problem and research question we aim to address. Section IV presents an initial analysis of experimentation that is used to understand how the quantization of the parameters of the network affects the inference accuracy. Using the observations and conclusions from this analysis, we design an algorithm to efficiently find the optimal precision levels for each parameter of each layer in the network in Section V. Section VI presents the results of the algorithm, which was tested on two different CNN architectures trained on four different datasets, and we compare the low-precision model of our algorithm to low-precision CNNs generated by a simple baseline method commonly used by commercial tools. Finally, we present our conclusions in Section VII.

II. RELATED WORK
There is a significant amount of work in the literature that has explored reducing the computational complexity and memory requirements of neural networks using quantization. [11] presents a clear overview of the topic of quantized neural networks, with details on the popular methods and techniques. Various works have presented novel techniques to quantize the parameters of neural networks; the proposed techniques tend to fall under two general approaches, namely quantized training and post-training quantization. Quantized training has been a popular approach focused on training neural networks using low-precision number representations for weights, activations, and in some cases gradients. Popular works have successfully used binary numbers [8], [9], [18] and integer arithmetic [13], [22], reaching inference accuracy close to that of the original network. Works that have used this approach have therefore been able to replace many expensive operations with simple integer or bit-wise operations, reducing training time and complexity. One of the common arguments against this approach is the problem of gradient mismatch due to the difference between the quantized and full-precision activation functions, which causes a problem with gradient updates during gradient descent [17]. The aforementioned papers, and many others using this approach, also often need to rely on full-precision parameters for gradient and parameter updates to ensure convergence of gradient descent. In post-training quantization, the parameters of a network are quantized to obtain a low-precision network after training the network with full-precision parameters. The parameters of the network are then only quantized for efficient inference rather than for efficient training. There is however a limit to how much the arithmetic precision of the network can be lowered without further degradation of the inference accuracy of the network. Many works choose to then re-train the network post-quantization with the original training data, either once or multiple times during quantization [1], [3], [12], [19].
This helps recover losses incurred due to quantization, allowing for harsher quantization of the parameters of the network. The resulting quantized networks allow for significant model size reduction with prediction accuracy on par with their floating-point precision counterparts. In [12], the authors of DeepCompression take a pre-trained model and pass it through their three-stage pipeline, which consists of pruning the weights, quantizing the weights and finally Huffman-encoding them, for a 35-49x reduction in model size. During this process, they re-train the network multiple times to ensure almost no accuracy is lost. However, the aforementioned approaches are not ideal in practice, and especially not for a quick and efficient deployment of a low-precision CNN model. The reasons for this, as discussed by [2], are that these approaches require access to the original training data, which for privacy reasons may not be possible. Secondly, the processes are also time consuming, given the additional time needed for training and re-training. Finally, the networks might require additional optimizations for specific platforms or applications. For quick and efficient deployment of a quantized model for efficient quantized inference, we may instead quantize CNNs post-training with acceptable accuracy losses [2], [6], [16], [23]. Of the many highly cited works that utilize this approach, the most common technique uses k-means clustering for the weights of a network, which was, to the best of our knowledge, originally presented by [10]. The authors use a code-book to store k centroids, each of which replaces a group of weights. Since we then only need to store the code-book of k centroids in memory, we can significantly reduce the memory consumption of the CNN model. Weights that pertain to the cluster with centroid k can easily be referenced using the code-book. Many papers have used this approach and some have worked to improve upon it [5], [20]. A disadvantage of using k-means, as highlighted by [5], is that the k-means approach does not allow for control over the performance loss due to quantization. Our work also takes inspiration from the conclusions found in [14]. They find that using a per-layer granularity for quantization is more beneficial than quantizing the parameters of all layers to one representation, given that layers behave differently when subjected to quantization. This allows the parameters of some layers to be quantized more than others, ensuring that no layer forms a bottleneck. Many of the works cited above also use a per-layer approach to quantizing neural networks with low bitwidths and high inference accuracy. In using layer-wise quantization, works such as [24] have empirically found the first and last layers of a CNN to be sensitive to quantization. As a result, they choose not to quantize these layers. [4] however finds that using conservative quantization for the first and last layers can still perform sufficiently well with minimal accuracy degradation. Our work takes note of this and also finds the first layer to be quite sensitive to quantization, therefore taking a more conservative approach for this layer. Our approach to dealing with quantization for each layer in the network is covered in more detail in Section V-B. [14] finds that simple post-training quantization can provide sufficiently quantized models with almost no accuracy loss down to bitwidths as low as 8 bits.
Since we consider arbitrary bitwidths and perform layer-wise quantization, the quantized models produced by our method, Dependent Optimized Search, have bitwidths as low as 2 bits for some parameters and up to 5 or 7 bits for others. For quick deployment of a quantized model, post-training quantization without re-training is therefore a useful approach to quantize the parameters of a given pre-trained model.

III. PROBLEM DEFINITION
In this section, we first present some preliminaries about the fixed-point number representation and the quantization function used to quantize any given floating-point number to fixed-point. Following this, we precisely formulate our intended goal for this paper.

A. Fixed-point Number Representation
Reducing the numerical precision from floating-point to integer precision results in a loss of the representation of fractional numbers. However, it results in an increase in computational speed because of the simplicity of integer operations over floating-point operations. Take, for example, the multiplication of the two integers 3 and 4 as compared to the multiplication of the number π, up to a few digits, by itself. The latter requires more precision, and the operations are more complex given the fractional part of the number. [15] presents some experimental results showing that floating-point operations are slower than integer operations. A middle ground between these two formats is fixed-point precision. Fixed-point arithmetic is treated as integer arithmetic in hardware, but with an added fractional offset that fixes the number of binary digits after the radix. This allows for a limited fractional precision of the number. Since the fractional offset is a fixed value, integer operations can be applied to fixed-point numbers, as the fractional offset can be accounted for after the arithmetic integer operations are executed. The fixed-point system is advantageous over the floating-point system given that the latter has a different exponent for each number, all of which need to be accounted for in each calculation, making calculations more complex and therefore more expensive. With a common exponent for multiple numbers, the fixed-point number system allows for simpler operations, as the exponent is accounted for after the arithmetic operations have occurred. [7] includes an insightful graphic on how floating-point and fixed-point numbers differ. Fig. 1 illustrates the characteristics of a fixed-point number and some examples of it. We characterize a fixed-point number using:
1) Bitwidth (BW): The number of bits needed to store the number. The usual bitwidth for numbers in CPUs is 32 bits. In our model of the fixed-point number, the lowest possible bitwidth is 2 bits, consisting of a sign bit and either an integer or a fractional bit. In our paradigm, a 1-bit fixed-point number would only include the sign bit, which we do not allow. Hence, no numbers can be represented using a 1-bit fixed-point number.
2) Fractional Offset (F): The integer position of the radix for the number. A positive value of F results in higher fractional precision (more precision), while a negative value of F results in storing more integer bits (more range). Fig. 1b illustrates the effect of changing F.
The total bitwidth (BW) is therefore the sum of the number of bits allocated for the Sign (S), Integer part (I) and Fractional part (F). The Sign is always represented by 1 bit.
The integer part of the number covers the range of numbers that can be represented, while the fractional part defines the precision. We then define a fixed-point representation as (BW, F).

Fig. 1: (a) Binary representation of a fixed-point number consisting of a sign bit (S), integer part (I) and fractional part (F) making up the bitwidth (BW); the place value of each bit is provided below the example. (b) Examples of fixed-point representations for a given number; the bits stored in memory are highlighted in yellow and blue, and their floating-point equivalent value and corresponding BW and F are given to the right.

We also discuss two elements of a binary number that will be useful later, namely the most significant bit (MSB) and the least significant bit (LSB). The MSB is the bit with the largest numerical value, found as the left-most bit (excluding the sign). In Fig. 1a, the MSB is the bit with the place value of 2^2. The LSB is the bit with the least numerical value, found as the right-most bit in the bitwidth. The LSB therefore defines the step size and precision, which is the difference between each pair of consecutive binary numbers (1 bit). In Fig. 1a, the LSB is the bit with the place value of 2^-4. The fixed-point number representation is a data type that is only considered in binary, given that we fix the place of the radix based on the number of bits. However, we may equivalently represent fixed-point numbers in floating-point format by summing up their respective place values (base 2) according to the binary number system. As an example, the number in Fig. 1a has an equivalent floating-point value of -5.375. Throughout this paper, we simulate fixed-point binary numbers by considering their floating-point equivalent values. By scaling numbers using base 2, we can effectively execute the same operations on floating-point values that would be applied to the binary numbers, with no differences in the final result. Although the most common method of representing binary numbers in hardware is two's complement, we choose to represent our signed fixed-point binary numbers using an explicit sign bit. This has the undesirable consequence of a possible representation of both positive and negative 0. However, since our work is in simulation using a high-level language like Python, this is not a problem because Python treats positive and negative zero as zero.

B. Quantization function
We quantize floating-point numbers to fixed-point by mapping them to their fixed-point equivalent representations in floating-point. A given floating-point number is quantized by first choosing the bitwidth and fractional offset (BW, F) of the target fixed-point representation that the number must be quantized to. We generalize this by defining a function Q(x) that quantizes a group of floating-point numbers x to their fixed-point equivalent values specified by the same target representation (BW, F), while also taking into account our model of the fixed-point number:

Q(x) = C(R(x · 2^F), -(2^(t-S) - 1), 2^(t-S) - 1) / 2^F    (1)

where t is defined as

t = BW if BW ≥ 2, and t = 2 otherwise    (2)

and R(x) is a function that rounds x to the nearest integer, S is the number of bits for the sign with S = 1, and C(x, a, b) is the clipping function defined as

C(x, a, b) = min(max(x, a), b)    (3)

The piece-wise function of t is a guard that ensures that the minimum possible bitwidth is 2 bits, which is a design choice. By applying (1) with choices for (BW, F) we can observe the effect on the resulting quantized values in Fig. 1b.
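The following is a minimal NumPy sketch of the quantization function Q(x) as given in (1)-(3); it is our own illustrative implementation, not the authors' original code:

```python
import numpy as np

def quantize(x, BW, F, S=1):
    """Simulate fixed-point quantization of floating-point values x
    to the representation (BW, F), following Eq. (1)."""
    t = max(BW, 2)                    # Eq. (2): guard for a minimum bitwidth of 2
    m = 2 ** (t - S) - 1              # largest integer magnitude the bitwidth can store
    scaled = np.round(x * 2.0 ** F)   # R(x * 2^F): rounding fixes the LSB to 1/2^F
    clipped = np.clip(scaled, -m, m)  # C(., -m, m): clipping enforces the bitwidth
    return clipped / 2.0 ** F         # back to the floating-point equivalent value
```

For instance, quantize(-83.5625, BW=6, F=2) returns -7.75, matching Example 2 of Fig. 1b and the worked example below.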
As an example, quantizing the number -83.5625 to a fixed-point representation (BW = 6, F = 2) using (1) gives a value of -7.75, which, if converted to the fixed-point binary format, gives the number seen in Example 2 of Fig. 1b. Alternatively, we may also apply the same operations to the binary numbers observed in Fig. 1b and calculate their floating-point equivalent values, as was done for Fig. 1 earlier. The results will be identical. In order to simplify the experimental procedure, we use the Keras framework to work with CNN models. Since fixed-point or integer formats are not supported for this purpose by Python or Keras, Q(x) allows us to simulate fixed-point quantization on floating-point numbers that are equivalent to their binary fixed-point representations. The operations in Q(x) simulate fixed-point quantization due to the scaling by 2^F. The resulting quantized values are essentially the floating-point equivalent values of the fixed-point numbers. The two main operations in Q(x) are the rounding and clipping functions R(x) and C(x, a, b) respectively.
1) Rounding: By rounding the result of x · 2^F to the nearest integer, we define the LSB, 1_LSB = 1/2^F, and therefore the numerical precision. Since the LSB defines the step size, an increment of 1 bit in binary is the equivalent of an increase of 1/2^F in decimal. The value of F determines the precision of the binary fixed-point representation, as changing F affects the value of the LSB and therefore the magnitude of the step size of 1 bit. To retain high precision, we therefore need a high value of F, as that results in an extremely small step size between consecutive numbers. Conversely, to reduce precision we may decrease the value of F, therefore increasing the step size between consecutive numbers and making the steps more discrete. Fig. 1b visually illustrates the effect of changing F as moving the set of bits that can be stored in memory (denoted in yellow and blue). For example, the representation (6, 2) has its smallest increment defined by 1_LSB = 1/2^2 = 0.25, implying that a 1-bit increment is an increment of 0.25 in decimal. The representation (6, -2), however (a lower value of F), has its smallest increment defined by 1_LSB = 2^2 = 4, similarly implying that a 1-bit increment is an increment of 4 in decimal. The latter is lower in precision, as the former representation covers more intermediate values. Decreasing F below a certain threshold F_min would result in too low a precision to represent the original number, as the value of the LSB grows beyond the value of the original number itself. This automatically results in large differences between the quantized value and the original value, and therefore in a high quantization error due to rounding.
2) Clipping: Clipping occurs when a number is too large to be stored, given the range that the fixed-point number system allows for. In such a case, values outside this range are restricted to the maximum range of the fixed-point arithmetic system. With reference to our binary fixed-point number, this threshold is determined by the capacity of the bitwidth, namely the maximum absolute value that can be stored within it. For instance, with a bitwidth of 3, the maximum absolute value that can be stored is 2^(BW-S) - 1 = 2^(3-1) - 1 = 3. All values larger than 3 or lower than -3 will therefore be restricted to those values respectively.
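Continuing the sketch from Section III-B, this clipping behaviour can be reproduced directly (the input values are illustrative choices of our own):

```python
x = np.array([-5.0, -2.0, 0.6, 4.0, 7.0])
print(quantize(x, BW=3, F=0))
# -> [-3. -2.  1.  3.  3.]: values beyond the capacity of 3 bits are clipped to +/-3
```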
For a fixed value of F, with F > F_min, reducing BW effectively reduces the range of values that can be stored, because bits are removed from the left of the bitwidth. In order to avoid clipping, the bitwidth must account for the largest possible value in the given distribution of numbers, which occurs when we choose BW ≥ BW_min, where |2^(BW_min - 1) - 1| = max(|R(x · 2^F)|), with reference to (1), (2). As the bitwidth is reduced such that BW < BW_min, we find that |2^(BW-1) - 1| < max(|R(x · 2^F)|), and values outside this range are clipped. We also effectively reduce the most significant bit (MSB) by reducing the bitwidth, where 1_MSB = 2^(BW-S-1)/2^F. Conversely, for a fixed BW, clipping can also occur by increasing the value of F in R(x · 2^F) such that |R(x · 2^F)| > |2^(BW-1) - 1|. Mathematically, values are scaled to be larger than what can be stored in the bitwidth, therefore resulting in clipping. Visually, in Fig. 1b, increasing F simply results in shifting the bitwidth window further to the right, therefore changing the value of the MSB and decreasing it. This effect can be observed when increasing F from -2 to 2 in Fig. 1b, as a value of -84 reduces to -7.75. We can note here that, for a fixed bitwidth BW = 6, F = F_max = -2 must be chosen to ensure that no clipping occurs. The MSB plays an important role in the quantization error induced, since the MSB has the largest numerical value in the binary number.
3) Range of values represented by (BW, F): A given fixed-point representation (BW, F) can represent a range of values (-m, m) with a step size of 1_LSB = 1/2^F, where m is given as

m = (2^(BW-1) - 1) / 2^F

Therefore, we can directly relate our choice of fixed-point representation to the distribution of the parameters we are trying to quantize. For example, using a fixed-point representation (5, 7) will allow us to represent a range of values of (-0.1171875, 0.1171875) with a step size of 1_LSB = 1/2^7. This will prove to be useful when trying to understand the range of values that a given fixed-point representation can cover.

C. Problem Formalization
We can formalize our intended goal by using the quantization function given in (1). We characterize a floating-point precision pre-trained CNN model with L layers by the parameters p ∈ {W, B, A}, representing the weights, biases and activations of the model respectively, where for each convolutional or dense layer l in the network, with a kernel size of w_l × h_l with c_l channels and with output activations of size x × y, we have the weight tensor W_l, the bias vector B_l and the activation tensor A_l. The inference accuracy of the network is defined as a(W, B, A). Using (1) we may then quantize the parameters of the network to their respective fixed-point representations (BW, F)_l,p. The inference accuracy of the quantized network would then be a(Q(W), Q(B), Q(A)). After quantizing all the parameters of the network using fixed-point representations (BW, F)_l,p for the respective parameters of each layer, we can evaluate the loss in inference accuracy using

∆a = a(W, B, A) - a(Q(W), Q(B), Q(A))

We therefore aim to find optimal fixed-point representations {{(BW, F)*_l}_W, {(BW, F)*_l}_B, {(BW, F)*_l}_A | l = 1, ..., L} subject to the following three constraints, concerning the inference accuracy loss, the memory consumed by the network and the cost of multiplications respectively:

∆a ≤ λ

minimize Σ_{l=1..L} Σ_{p ∈ {W,B,A}} n(p_l) · BW_l,p

minimize Σ_{l=1..L} BW_l,W · BW_l,A

where λ is the acceptable loss in inference accuracy after quantizing all the weights, biases and activations of the network, a value defined by the user, and n(x) returns the number of values in x. For the constraint on memory consumption, we simply sum, across the parameters of all layers, the number of bits for the parameters of each layer. For the cost of multiplications in bits, we take the sum, across all layers, of the product of the number of bits for the weights of a layer and the number of bits for the activations of that layer. Our goal is to minimize these two quantities. The aforementioned constraints simply translate to trying to find the lowest possible bitwidths for all parameters in our network, such that the accuracy loss is within acceptable bounds.
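As a concrete reading of these two cost measures, the following is a minimal sketch; the dictionary layout and function name are our own illustrative assumptions, not the paper's code:

```python
def network_cost(reps, counts):
    """reps[(l, p)] = (BW, F) for layer l and parameter p in {'W', 'B', 'A'};
    counts[(l, p)] = n(p_l), the number of values in that parameter group."""
    layers = {l for (l, _) in reps}
    memory = sum(counts[key] * reps[key][0] for key in reps)                # total bits stored
    mult_cost = sum(reps[(l, 'W')][0] * reps[(l, 'A')][0] for l in layers)  # per-layer BW_W * BW_A
    return memory, mult_cost
```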
Section VI-C1 will discuss how values of the acceptable inference accuracy loss are chosen. Our primary objective can therefore be formulated with the following research question: How can we find an optimal set of fixed-point representations (BW, F)*_l,p for parameters p ∈ {W, B, A} of each layer l = 1, ..., L in a pre-trained Convolutional Neural Network such that we have the smallest possible bitwidths with an acceptable loss in the network's inference accuracy? Along with our primary research question, we aim to answer whether the quantization of a certain parameter p in layer l is dependent on the quantization of some other parameter in the CNN. We may formulate it as follows: Are the optimal fixed-point representations (BW, F)*_l,p for parameter p of layer l either dependent on the fixed-point representations (BW, F)*_k,p for k ≠ l, or dependent on the fixed-point representations (BW, F)*_l,q for q ≠ p, for all layers l and k?

IV. INITIAL ANALYSIS
In order to design a method for finding optimal fixed-point representations of all parameters in the network, we first performed some experiments to investigate the relationship between the fixed-point representation (BW, F) and the accuracy loss of the low-precision CNN. We start by presenting our pipeline for the post-training quantization process. We then discuss two ways of evaluating the accuracy loss of the network after quantization. Using the pipeline, we then perform a brute-force analysis of the fixed-point representation (BW, F) used to quantize a set of parameters of the network, to observe the effect on the inference accuracy of the network, ∆a. These experiments are the basis for the design of our method in Section V-A.

Fig. 2: A visualization of our post-training quantization and inference pipeline used to quantize the parameters of each layer k of a CNN. Each layer k has parameters (W_k, B_k, A_k). Here BW_k,p and F_k,p are the input parameters to our pipeline, and the inference accuracy loss of the quantized network ∆a is its output.

A. Post-training quantization process
(Algorithm 1, excerpt: Adapt the CNN model again by replacing p_l by Q(p_l)_(BW,F)_l,p // Post-training quantization process; ∆a_l,p ← ∆a^D_l,p // Dependent quantization inference accuracy loss; end)
1) We take a pre-trained CNN model and choose fixed-point representations (BW, F)_k,p for the parameters to be quantized. 2) To quantize the weights or biases p_k of layer k, we adapt the CNN model by replacing p_k with the quantized values Q(p_k). 3) To quantize the activations, we append a quantization layer (implemented in Keras) after layer k. For convolutional layers, this layer is appended after the activation function (ReLU), while for the last dense layer, the layer is appended before the activation function (Softmax). 4) We may then optionally feed the same quantized CNN back into the pipeline and repeat steps 2 and 3 to quantize other parameters in the CNN. 5) We run inference on our quantized model with the original test data and measure the loss in inference accuracy ∆a. We discuss this further in the following subsection.
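As an illustration of step 2 of this pipeline, a minimal sketch using the Keras API and the quantize() helper from Section III-B is given below; the function and variable names are our own, and activation quantization via an appended layer is omitted for brevity:

```python
from tensorflow import keras

def quantize_layer_params(model, layer_idx, rep_w, rep_b):
    """Replace the weights and biases of one layer by their fixed-point
    equivalent values Q(p_l); all other parameters are left untouched."""
    layer = model.layers[layer_idx]
    w, b = layer.get_weights()
    layer.set_weights([quantize(w, *rep_w),   # rep_w = (BW, F) for the weights
                       quantize(b, *rep_b)])  # rep_b = (BW, F) for the biases
```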
Note the sparsity induced in the respective distributions due to the step size defined by their LSB values.

B. Evaluation of the Inference Accuracy Loss

From the post-training quantization pipeline designed in Section IV-A, we note that the inference accuracy loss ∆a can be evaluated in two ways, which we discuss below. Algorithm 1 illustrates the process for the two types of evaluation. The algorithm takes as input the parameter values p ∈ {W, B, A} for layer l, a fixed-point representation (BW, F)_{l,p} for quantization, a boolean flag that we will discuss, and a set of fixed-point representations for the quantization of the other parameters in the network that are not p. The two types of quantization follow directly from the post-training quantization process described in Section IV-A.

1) Independent Quantization: By setting the boolean flag IND = True in EvalAccLossCNN, we define independent quantization as quantizing a parameter p_l of a layer to a fixed-point representation (BW, F)_{l,p} while keeping all other parameters in the network at floating-point precision. We call it independent quantization because p_l is quantized independently of how the other parameters in the network are quantized. The inference accuracy loss ∆a is then measured for the quantization of p_l, which we express as ∆a^I_{l,p}; we use the superscript I to refer to independent quantization. In this way, the effects of quantizing the other parameters are effectively ignored, as they are left at full precision in the evaluation of the inference accuracy loss. Therefore, the quantization errors of the other quantized parameters are not taken into account during the evaluation of ∆a^I_{l,p}.

2) Dependent Quantization: By setting the boolean flag IND = False in EvalAccLossCNN, to quantize parameters p_l to the fixed-point representation (BW, F)_{l,p} we perform dependent quantization by also quantizing the other parameters q ≠ p in the network to their fixed-point representations (BW, F)_{l,q} (Step 4 of the post-training quantization process). Lines 6–13 in Algorithm 1 show how dependent quantization is performed. We first take all parameters q ≠ p of layers k (1 ≤ k ≤ L) and replace them by the quantized values Q(q_k)_{(BW,F)*_{k,q}}, using their fixed-point representations (BW, F)_{k,q} stored in r_Q. We then perform a similar task by adapting the same CNN and replacing its parameters p_l of layer l by Q(p_l)_{(BW,F)_{l,p}}, given the fixed-point representation (BW, F)_{l,p}. We can then measure the loss in inference accuracy ∆a = ∆a^D_{l,p} for the quantization of parameters p_l and parameters q_k; we use the superscript D to refer to dependent quantization. Unlike independent quantization, dependent quantization now includes the effects of the errors of the other quantized parameters in the evaluation of ∆a = ∆a^D_{l,p}. With this process, we sequentially quantize the parameters of a network while keeping the other parameter values quantized.

Fig. 4: Heatmap of the inference accuracy loss ∆a_{l,p} when using grid search on BW and F to quantize (a) the weights or (b) the activations of a layer of a simple 5-layer sequential CNN trained on the MNIST dataset. All other parameters were kept at full precision.
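Algorithm 1 can be rendered schematically in Python as below; this is only a sketch of the evaluation logic, and `clone_and_quantize` and `accuracy` are hypothetical helpers standing in for the model-adaptation and inference steps of the pipeline in Fig. 2.

```python
def eval_acc_loss_cnn(model, a_ref, l, p, rep, IND, r_Q):
    """Accuracy loss after quantizing parameter p of layer l to rep = (BW, F).
    IND=True : independent quantization; all other parameters stay at
               floating-point precision.
    IND=False: dependent quantization; other parameters q != p of every
               layer k are first quantized to the representations in r_Q."""
    m = model
    if not IND:
        for (k, q), rep_q in r_Q.items():   # lines 6-13 of Algorithm 1
            if q != p:
                m = clone_and_quantize(m, k, q, rep_q)
    m = clone_and_quantize(m, l, p, rep)    # quantize the target parameter
    return a_ref - accuracy(m)              # delta-a^I or delta-a^D
```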
Given the difference between the two methods of evaluating ∆a, where we either do or do not take the other quantized parameters into account, we can expect discrepancies between ∆a^I_{l,p} and ∆a^D_{l,p}; concretely, ∆a^D_{l,p} ≥ ∆a^I_{l,p}, given that the former takes the errors due to the quantization of the other parameters into account in the accuracy loss. We will show and discuss these discrepancies in more detail through our results.

C. Brute Force Analysis

With the post-training quantization pipeline and a method of evaluating the inference accuracy loss ∆a, we now aim to observe the effect that changing the bitwidth and fractional offset (BW, F) for a group of parameters in our network has on the inference accuracy. We associate this group of numbers with the parameter values pertaining to a layer in the network, p_l, where 1 ≤ l ≤ L and p ∈ {W, B, A}, thereby quantizing the parameters of the network layer-wise.

We now perform a brute-force analysis of the bitwidth and fractional offset (BW, F)_{l,p} used to quantize all parameters p of all layers l, while measuring the accuracy loss ∆a^I_{l,p} to understand the independent effect of quantizing parameter p_l without considering the quantization effects of other quantized parameters. Hence, the other parameters are left at floating-point precision. We can then repeat this process for each parameter in the network for all layers.

Fig. 4 shows a heatmap of the resulting experiment, with the values of ∆a^I_{l,p} for a given range of combinations of (BW, F), on the second convolutional layer (an arbitrary choice) of a 5-layer sequential CNN (4 convolutional and 1 dense layer) trained on the MNIST dataset, for weights in Fig. 4a and activations in Fig. 4b. Similar results collected for the other layers can be found in Appendix A. We identify two major regions: the darker region, representing the area with little to no accuracy loss compared to the original network, and the brighter region, where the network can no longer make useful predictions.

Note the negative values in the center of Fig. 4a, indicating a possible improvement in the inference accuracy of the network compared to the original full-precision network. The magnitude of this improvement, however, is quite small. We therefore reason that this improvement could be within the margin of error of the inference accuracy measurement, which might vary across data. Another possibility is that the fractional offset causes a set of weights around zero to be rounded to zero, resulting in some regularization of the network.

We do not present the results for biases here, because the experimental results showed that, for the most part, quantization of the biases did not have a significant impact on the inference accuracy of the network. We will show this through our results in Section VI.

The advantage of the plots in Fig. 4 is that the effects of rounding and clipping can quickly be observed in a visual representation. However, these plots are expensive to generate, considering how long it takes to brute-force this grid of values. We discuss the effects of rounding and clipping in detail below, using the intuition generated in Section III-B.

1) Rounding and the value of F_min: In Section III-B we noted that the fractional offset determines the step size, characterized by the LSB as LSB = 1/2^F. Decreasing the fractional offset results in sparser distributions of values, as each increment of 1 bit corresponds to a large discrete step.
Fig. 4 shows that reducing F too much results in a high loss of numeric precision for the set of parameters being quantized. With F too low, the value of the LSB increases and intermediate values are rounded to extremely sparse and discrete values. These large step sizes cause too many intermediate values to be rounded up or down, which increases the quantization error, because the values are moved too far from their original values, resulting in a large drop in the inference accuracy of the network. Fig. 4a shows that F_min = 4 for little to no accuracy loss for weights. Fig. 4b, on the other hand, shows that F_min = 0 for activations, with an accuracy loss of 0.9%.

We can also directly relate the scale of the fractional offsets for weights and activations in Fig. 4 to the distribution of weights and activations in Fig. 3. Since |W_l| << 1 in Fig. 3, we require positive fractional offsets in Fig. 4a, as we need more fractional precision. On the other hand, since A_l ≥ 0, with many activations that are also greater than 1, we note in Fig. 4b that we may also potentially use negative or zero fractional offsets, thereby only storing the integer part of the number and losing fractional precision. In this latter case, however, given that the majority of the activations in Fig. 3 are less than 1, the effect of rounding to integer precision results in a loss of 4.2% in inference accuracy.

2) Clipping and the values of BW_min and F_max: From the discussion of clipping in Section III-B, we noted that clipping occurs either when BW < BW_min for a fixed F or when F > F_max for a fixed BW. Fig. 4 shows this behaviour along the diagonal line, where either increasing F for the same BW or decreasing BW for the same F results in clipping of the maximum and/or minimum values of the respective weights or activations. The plot also shows that clipping has a very harsh effect on the inference accuracy, as the loss ∆a increases substantially.

Fig. 4 shows that the values of F_max and BW_min can vary depending on the choice of BW and F, respectively. This can be explained by the fact that, for a bitwidth and fractional offset with BW ≥ BW_min and F ≤ F_max, reducing BW and F equally drops bits from the right side of the number, thereby preventing the numbers from being subjected to the clipping operation in (1). This can also be observed in Fig. 1b, where reducing BW and F by 2 (examples 3, 4) did not result in clipping: we retained the MSB of the original number while bits were dropped from the right. The numeric value did, however, increase from −84 to −80 due to the reduction in precision caused by the decrease in F. Mathematically, given that BW and F are raised to the same base of 2, as long as BW ≥ BW_min and F ≤ F_max, changing BW and F equally maintains |2^{BW−1} − 1| > max(|R(x · 2^F)|), and values are therefore not clipped. Hence, we note in Fig. 4 that as long as we reduce BW and F equally while BW ≥ BW_min and F ≤ F_max, no values of the weights or activations are clipped or restricted.

We noted in Section III-B that, using (3), we can evaluate the range of values a given fixed-point representation can cover. This allows us to directly relate the bitwidth to the distribution of the parameters we want to quantize. For example, the fixed-point representation (BW, F) = (4, 7) can represent values within the range (−m, m) = (−0.0546875, 0.0546875). Comparing this to the original weights in Fig. 3, we observe that quantizing the original weights to this fixed-point representation would automatically clip the weight distribution to (−m, m), because m << max(|W_l|).
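This clipping check, and the smallest non-clipping bitwidth BW_min for a given F, follow directly from the definitions above; a minimal sketch, with function names of our choosing:

```python
import numpy as np

def min_bitwidth(x, F):
    """Smallest BW with 2^(BW-1) - 1 >= max(|round(x * 2^F)|),
    i.e. the smallest bitwidth that avoids clipping x for this F."""
    peak = np.max(np.abs(np.round(x * 2.0 ** F)))
    return int(np.ceil(np.log2(peak + 1))) + 1  # +1 for the sign bit

def clips(x, BW, F):
    """True if representing x in (BW, F) would clip at least one value."""
    return BW < min_bitwidth(x, F)
```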
By observing the parameter distributions, we also noticed that even slight clipping has a more significant effect on the accuracy loss of the network than the effects of rounding. This can be explained by the fact that clipping values to fit a lower bitwidth produces much larger quantization errors, considering the differences between the original and quantized values. Take, for example, the equivalent numbers in Fig. 1b (examples 2 and 5), where clipping the original number to a lower value or bitwidth results in a large difference between the original value (−83.5625) and the quantized number. In the same figure, reducing the precision by reducing F results in a much smaller quantization error (examples 3, 4), because the differences between −83.5625 and −84 or −80 are much smaller. As noted earlier, a major reason for this is the value of the most significant bit.

3) Conservative vs Harsh Quantization: Generally, in Fig. 4, we note that the accuracy loss increases as the bitwidth, and possibly also the fractional offset, are reduced. Fig. 4a shows that for a fixed-point representation of (BW, F) = (2, 4), the accuracy loss is 1%. Increasing this bitwidth to 4 bits ((BW, F) = (4, 4)) would result in only a 0.1% accuracy loss. Similarly, for activations in Fig. 4b, using a bitwidth lower than 5 bits results in accuracy losses of at least 0.9%. Across all our brute-force tests, we noticed that quantizing conservatively (using larger bitwidths) led to lower accuracy drops, while quantizing harshly resulted in larger accuracy losses. The reason is that with lower bitwidths we restrict the range of parameter values that can be represented, thereby throttling information. We therefore require a larger bitwidth to ensure lower accuracy losses.

4) Optimal Fixed-point representation: We noted through the constraints in (7) on memory consumption and multiplication complexity that we require the smallest possible bitwidths to quantize the parameters in our network such that the loss in inference accuracy remains acceptable. From Fig. 4 we note that we achieve this when both the bitwidth and fractional offset are as low as possible while ensuring that the accuracy loss satisfies ∆a ≤ ε. For instance, in Fig. 4a, with ∆a = 0.001 (a loss of 0.1%), we can reduce the bitwidth of the quantized weights to as low as 3 bits, with a fixed-point representation of (BW, F)*_{l,W} = (3, 4).

5) Weights vs Activations: The plots in Fig. 4 also show a clear discrepancy between the quantization of weights and activations. By representing all weights of the given layer l with 3 bits, the inference accuracy of the network drops by 0.1% (∆a^I_{l,W} = 0.001). However, quantizing activations to a fixed-point representation with BW = 3 would result in at least a 1% loss in inference accuracy. In order to retain the original inference accuracy, the activations require a minimum of 5 bits (with F = 2). We found a similar discrepancy between the two parameter types for the other models trained on the other 3 datasets. From this we conclude that activations are generally more sensitive to quantization than weights; empirical data for this may be found in Appendix A. This conclusion is also supported by works in the literature. We discuss the reason for this discrepancy in Section VI-E.
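Before turning to the per-layer observations, note that the sweep behind Fig. 4 amounts to a plain grid search; a sketch, reusing the hypothetical `eval_acc_loss_cnn` helper from above:

```python
import numpy as np

def brute_force_heatmap(model, a_ref, l, p, bws, fs):
    """Independent-quantization accuracy loss for each (BW, F) pair,
    i.e. the data behind the heatmaps of Fig. 4."""
    grid = np.empty((len(bws), len(fs)))
    for i, bw in enumerate(bws):
        for j, f in enumerate(fs):
            grid[i, j] = eval_acc_loss_cnn(model, a_ref, l, p,
                                           (bw, f), IND=True, r_Q={})
    return grid  # rows: bitwidths, columns: fractional offsets
```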
6) Observations for various layers: For readability, we only presented the results of the brute-force analysis on the weights and activations of one layer in the CNN; additional results can be found in Appendix A. While the experiment yielded similar plots for the other layers, as seen in Fig. 4, the major discrepancy was observed for the parameters of layer 1. We generally observed that the parameters of layer 1 required higher bitwidths and were therefore more sensitive to the operations of clipping and rounding. The results of our experiments in Section VI will show this more clearly. We also discuss the reasons for these layer-wise discrepancies in Section VI-E.

7) Evaluation of the accuracy loss: For the results in Fig. 4 we performed independent quantization by measuring the inference accuracy loss ∆a^I_{l,p}, as discussed in Section IV-B. Performing a similar brute-force experiment, but instead measuring ∆a^D_{l,p} while quantizing the other parameters q ≠ p to their respective fixed-point representations (BW, F)_{l,q}, would result in larger accuracy losses for the network. This is because the errors due to the quantization of the other parameters may accumulate through the network. With the other parameters quantized, the same fixed-point representation (BW, F)*_{l,W} = (3, 4) may now result in a larger inference accuracy loss ∆a of the network if we evaluate ∆a^D_{l,p}. To compensate, we may then be required to increase the bitwidth and fractional offset to retain or lower the loss in inference accuracy. The choice of dependent or independent quantization will therefore affect the optimal fixed-point representation (BW, F)*_{l,p}, introducing a dependency on other parts of the network. Through our experimentation we noted that ∆a^D_{l,p} ≥ ∆a^I_{l,p}; the magnitude of the difference between the two depends on how aggressively or conservatively the other parameters are quantized. However, the pattern and shape found in Fig. 4 were consistent for both types of quantization. We discuss our approach to investigating this in more detail in Section V-B.

8) Summary of Conclusions: We now briefly summarize the conclusions drawn from the brute-force analysis:

• For a minimum loss in inference accuracy, we need to choose a fixed-point representation (BW, F) such that F_min ≤ F ≤ F_max and BW ≥ BW_min. Decreasing F below F_min significantly increases the step size characterized by the LSB; this reduces the precision too much, resulting in high quantization errors due to rounding. On the other hand, increasing F above F_max or decreasing BW below BW_min clips the parameter distribution, in turn resulting in a high accuracy loss.

• Clipping the parameter distribution results in much larger accuracy losses than the operation of rounding, because clipping creates much larger errors between the original and quantized values, whereas rounding results in smaller quantization errors. The major reason for this is the role of the MSB, since it has the largest numerical value; changing the MSB by clipping therefore has larger effects on the quantization errors. Reducing the precision by reducing F results in smaller quantization errors and therefore smaller accuracy losses, while reducing BW has larger effects on the accuracy losses, since it effectively reduces the range of values that can be covered.
• The fixed-point representation (BW, F) can be directly related to the distribution of the parameter we intend to quantize. From the bitwidth BW we can evaluate the range of values that can be covered; from the fractional offset F, the number of bits needed for the fractional part of the number. The value of F can also be negative, which means that we only store integer bits instead.

• Quantizing conservatively requires larger bitwidths in order to minimize the effects of clipping and rounding on the distribution of the parameter values. Quantizing harshly, i.e. using lower bitwidths, results in more significant effects of clipping and rounding on the distribution of the parameter values.

• The optimal fixed-point representation is the one with the smallest possible bitwidth and fractional offset for which the accuracy loss of the quantized network is still acceptable.

• There are discrepancies in the minimum bitwidths required to represent weights, biases and activations. We discuss the reason for this in Section VI-E.

• While the shape of the plot is consistent for the parameters of all layers, we observed a slightly different behaviour in layer 1. Concretely, the parameters of layer 1 usually require larger bitwidths than the parameters of the successive layers. We discuss this in more detail in Section VI-E.

• The method of evaluating the accuracy loss affects our choice of the optimal fixed-point representation. Using dependent quantization (measuring ∆a^D_{l,p}) instead of independent quantization (measuring ∆a^I_{l,p}) influences which optimal fixed-point representation is possible for the accuracy loss we are willing to accept. We noted that, generally, including the errors due to the quantization of other parameters by evaluating ∆a^D_{l,p} results in ∆a^D_{l,p} ≥ ∆a^I_{l,p}, depending on how the other parameters are quantized.

A. Algorithm for Efficient Search

We now present the design of our method for finding the optimal fixed-point representations {{(BW, F)_l | l = 1, · · · , L}_p | p ∈ {W, B, A}} for all layers l and all network parameters p of a CNN that meet the constraints in (7). These constraints simply translate to determining the smallest possible bitwidths subject to an accuracy loss ∆a ≤ ε for the entire network, as stated in our primary research question. In Section IV-C4, through Fig. 4, the smallest possible bitwidths occurred for the fixed-point representation (BW, F)*_{l,p} that was lowest in value while ensuring ∆a_{l,p} ≤ ε_{l,p} (either independent or dependent) for any given layer l and any given parameter p in the CNN.

For the design of the algorithm, we exploit the predictable pattern of the accuracy loss ∆a^I_{l,p} of the network observed in Fig. 4 and the corresponding conclusions on the bounds of the fixed-point representation (BW, F)_{l,p}. We noted that as long as BW ≥ BW_min and F_min ≤ F ≤ F_max, the accuracy loss ∆a^I_{l,p} after quantizing the parameters of the CNN is minimal. We use these bounds and this behaviour to design our main method.

Algorithm 2 presents our method, OptSearchCNN, for finding optimal fixed-point representations (BW, F)_{l,p} for all parameters p ∈ {W, B, A} of all layers l in a given CNN. OptSearchCNN requires the following inputs:

• Acceptable Inference Accuracy Loss (ε_{l,p}): The stopping condition for the algorithm when finding the optimal fixed-point representation (BW, F)*_{l,p} for the quantization of parameter p of layer l.
• Initial Bitwidth (BW⁰_{l,p}): A starting bitwidth from which the algorithm works down towards the smallest bitwidth. This needs to be sufficiently high; in our experimentation, 8–12 bits sufficed, although this value may vary.

• Boolean flag (IND): A boolean variable that determines whether the inference accuracy loss evaluated after quantizing the parameter p_l of a layer is computed using independent or dependent quantization. It is set to True for independent quantization.

Based on the evaluation of ∆a, we perform either independent or dependent optimized search. In Section IV-C7 we noted that the optimal fixed-point representation (BW, F)*_{l,p} found for the parameter values p_l depends on how the inference accuracy loss is evaluated, either independently of or dependently on the other quantized parameters. Given that we compare the inference accuracy loss ∆a_{l,p} at every stage with the acceptable loss ε_{l,p}, our method of evaluating this loss is important; the flag IND, passed as input to OptSearchCNN and EvalAccLossCNN, decides this. The optimal fixed-point representations collected during the execution of the algorithm are stored in a set r_Q.

We now briefly summarize the main steps of OptSearchCNN in Algorithm 2:

1) By the order of the for loops in OptSearchCNN, we repeat the following process over all layers, one parameter at a time.

2) Start with a sufficiently large initial bitwidth BW⁰_{l,p} (8–12 bits suffices) and calculate the corresponding fractional offset F⁰_{l,p} such that we avoid clipping the maximum and minimum parameter values p_l. To avoid clipping, we simply assign as many bits of the bitwidth to the integer part of the number as needed to include the largest absolute value in the distribution; the number of integer bits is therefore I = ⌈log₂(max(|p_l|))⌉. F⁰_{l,p} is then given as

F⁰_{l,p} = BW⁰_{l,p} − I − S,    (8)

where S = 1 is the number of bits for the sign. We then evaluate the accuracy loss for this representation (BW, F). (Lines 4–5)

3) Traverse diagonally upwards with respect to (BW, F)_{l,p} in Fig. 4 by reducing the two values equally by 1 until ∆a_{l,p} > ε_{l,p}. The last representation collected for which the accuracy loss is acceptable is the lowest (BW, F) on the diagonal. (Lines 8–13)

4) Attempt to reduce the bitwidth further while keeping the fractional offset F constant. The resulting fixed-point representation (BŴ, F̂)_{l,p} is the optimal result of our diagonal search, with an accuracy loss ∆â_{l,p} ≤ ε_{l,p}. (Lines 14–20)

5) With (BŴ, F̂)_{l,p}, we also examine the neighbouring fixed-point representations with respect to Fig. 4.

A successful termination of this algorithm returns all optimal fixed-point representations for the parameters p of all layers l in the CNN, represented by r_Q : {{(BW, F)*_l | l = 1, · · · , L}_p | p ∈ {W, B, A}}. With these fixed-point representations, we may then quantize the respective parameters, giving the quantized parameters in (5), for which the inference accuracy is a(Q(W), Q(B), Q(A)) and the accuracy loss, as seen in (7), is ∆a ≤ ε. The fixed-point representations in r_Q should accordingly satisfy the constraints of (7).

The algorithm may also terminate unsuccessfully at the following lines of OptSearchCNN:

• Lines 5–8: The accuracy loss ∆a_{l,p} evaluated there is higher than the acceptable loss ε_{l,p}.

• Line 11: The while loop terminates early because none of the fixed-point representations evaluated give an acceptable accuracy loss.
In these cases we may have to either use a larger initial bitwidth BW⁰_{l,p}, potentially increasing it to 16 bits, or increase the acceptable accuracy loss ε_{l,p}. This results in a trade-off between the inference accuracy loss and the lower memory consumption that can be achieved with the resulting bitwidth.

With respect to time complexity, the execution of OptSearchCNN is bounded, as we use finite loops. The execution time of each instance of the loop depends on the starting bitwidth and will, in the worst case, terminate at the minimum possible bitwidth of 1 bit, at which point no bits remain to be removed. Additionally, this execution time depends on the time required to run inference and the time required to adapt the CNN model. Given that we tested our algorithm on simple models with four simple datasets, as discussed in Section VI-A, the inference time over the entire test set was fairly short. For the MNIST, Fashion-MNIST and CIFAR10 datasets, the inference time for the two types of architectures was roughly 3 s and 7 s, respectively. Inference on the SVHN dataset took the longest (7 s and 15 s for the two architectures), since it has many more test examples. The algorithm was executed on a CPU, while inference was executed on a GPU. We present the architectures used for experimentation in the following section.

B. Investigating Dependence of Quantization

In Section III-C, we also expressed the need to investigate whether the optimal fixed-point representations (BW, F)*_{l,p} for the parameters p_l depend on the optimal fixed-point representation (BW, F)*_{l,q} of another parameter q ≠ p in the network. By defining our method of evaluating the accuracy loss ∆a_{l,p} independently or dependently in EvalAccLossCNN (Algorithm 1), we have partially addressed this question. We now define our method of addressing it more concretely.

OptSearchCNN finds optimal fixed-point representations with minimal bitwidths subject to an accuracy loss ∆a_{l,p} ≤ ε_{l,p}. As mentioned earlier, the evaluation of ∆a affects the bitwidths that the algorithm is able to find.

1) Independent Optimized Search: We noted in Section IV-B that independent quantization ignores the effects of quantization on other parameters in the evaluation of ∆a^I_{l,p}. We can perform independent optimized search by evaluating ∆a^I_{l,p} and setting IND = True. Independent optimized search then searches for fixed-point representations with minimum bitwidths by testing whether ∆a = ∆a^I_{l,p} ≤ ε_{l,p}. The choice of optimal fixed-point representations for the parameters of each layer is thus independent of that of the other parameters, and in such a case the order in which the parameters of the network are quantized should not matter. We refer to this version of optimized search as independent optimized search.

2) Dependent Optimized Search: Dependent quantization, on the other hand, does take the effects of quantizing the other parameters q ≠ p into account in the evaluation of ∆a^D_{l,p} in EvalAccLossCNN. We can perform dependent optimized search by evaluating ∆a^D_{l,p} and setting IND = False. The optimal (BW, F)*_{l,p} used to quantize parameter p_l then depends on how the other parameters have been quantized up to that point. In such a case, we aim to investigate whether the parameters must be quantized in a specific order for the resulting fixed-point representations to provide either lower memory consumption or potentially a lower loss in inference accuracy.
We consider the following two cases:

1) Layers: Quantizing parameter p_l either forwards (layers l = 1, · · · , L) or in reverse (layers l = L, · · · , 1). One could also consider bi-directional or other (pseudo-)random orders across layers; however, we limit our scope to these two orders.

2) Parameters: The order in which we choose to quantize weights, biases and activations. There are 6 different ways to order the quantization of these three parameter types.

Given the dependency on how the other parameters in the CNN are quantized, we perform dependent optimized search in a more controlled manner. Concretely, we aim to find optimal fixed-point representations with minimal bitwidths such that the accuracy loss ∆a^D_{l,p} is controlled by our choice of ε^D_{l,p} at every stage of the execution of OptSearchCNN. This is done by setting an allocation scheme for the acceptable loss in inference accuracy ε^D_{l,p}. Controlling ε^D_{l,p} as the network is quantized allows dependent optimized search to quantize the respective parameters conservatively or harshly, depending on how the network has been quantized up to that point. A larger value of ε^D_{l,p} results in harsher quantization with lower bitwidths, while lower values result in conservative quantization and larger bitwidths. We discuss our allocation schemes in Section VI-C1. By controlling the acceptable accuracy loss as the parameters of the network are quantized, we can therefore expect a variance in the bitwidths that dependent optimized search is able to find. We discuss these layer-wise discrepancies in more detail in Section VI-C.

VI. EXPERIMENTS AND EVALUATION

We now test the implementation of OptSearchCNN with independent and dependent EvalAccLossCNN. We first present the CNNs and datasets used for testing OptSearchCNN. We then present the resulting accuracy losses, bitwidths and memory consumption of the quantized models produced by the fixed-point representations found with independent and dependent OptSearchCNN. Finally, we choose the better of the two types of OptSearchCNN and compare it against a baseline approach that is commonly used by commercial tools.

For readability and to keep the section concise, we only present a small subset of our results; the conclusions and observations, however, are based on experiments on all of the models and datasets mentioned above. Additional empirical data are provided in Appendices B–D. For our results, we only present the values of the accuracy losses and the corresponding bitwidths found by OptSearchCNN for quantizing a given CNN. We do not present the values of the fractional offsets from the optimal fixed-point representations (BW, F)_{l,p} given by OptSearchCNN, as they are not relevant to our constraints in (7).

A. Experimental Setup

For our experiments, we trained two CNN architectures on four different datasets (MNIST handwritten digits, CIFAR10, Fashion-MNIST and Street View House Numbers (SVHN)). OptSearchCNN then extracted the parameters of these models and optimized the fixed-point representations to obtain a quantized network. We use the following two architectures.

Fig. 5a: Inference accuracy loss ∆a^I_{l,p} for the parameters of each layer after finding its optimal (BW, F)_{l,p} using independent optimized search, while keeping the other parameters at full precision.

1) Sequential Model: For a simple architecture, we consider a single-branch sequential model with Convolutional layers, Max or Average Pooling and Batch Normalization.
In some cases, we also included Dropout layers during training to ensure that the models did not overfit the data. The sequential model consists of 14 convolutional layers and 1 dense layer for classification, with roughly 327k parameters.

2) Branched Model: For a branched architecture, we use the structure of the Inception model by Google [21]. The architecture has roughly 194k parameters and consists of 22 convolutional layers and 1 dense layer for classification. A visual illustration of the architectures is provided in Appendix E.

We also used Batch Normalization layers in all our models. We found that using Batch Normalization layers helped keep the distribution of the weights of the CNN normalized to equal magnitude, and thus less spread out, covering a smaller range of values. This allows us to use smaller bitwidths to cover the range of weight values with sufficient resolution. The activation function used for all convolutional layers is ReLU, while classification in the final dense layer is done using Softmax.

With respect to the quantization pipeline in Fig. 2, we quantize weights and/or biases and plug them back into the model. For activations, we add an additional layer that applies the quantization function Q(x)_{(BW,F)_{l,p}}. In Keras this is achieved with a Lambda layer that passes all activations through Q(x)_{(BW,F)_{l,p}}. This layer is added after the activations of convolutional layers, and before the activations of the dense layer. In addition, we limit the quantization process to convolutional and dense layers and leave the other layers untouched.

B. Independent Optimized Search

For independent OptSearchCNN, we instantiated Algorithm 2 with an acceptable accuracy loss for each parameter of each layer p_l of ε_{l,p} = ε^I_{l,p} = 0.003 (0.3%). OptSearchCNN therefore finds fixed-point representations for all layers l and all parameters p such that the loss when quantizing that parameter is ∆a^I_{l,p} ≤ ε^I_{l,p}. We specify an initial bitwidth BW⁰_{l,p} = 8 and set the boolean flag IND = True. After successful termination of independent OptSearchCNN, we obtain optimal fixed-point representations r_Q = r^I_Q, where each (BW, F)*_{l,p} is determined independently of the optimal (BW, F)*_{l,q} of the other parameters q ≠ p, i.e. with the other parameters at full precision.

Fig. 5a shows the accuracy losses ∆a^I_{l,p} measured when quantizing p_l to the fixed-point representation (BW, F)*_{l,p} stored in r^I_Q, using independent quantization, for a 15-layer sequential CNN pre-trained on MNIST. Independent OptSearchCNN successfully terminates with ∆a^I_{l,p} ≤ ε_{l,p} = 0.003 (0.3%) for all layers l and parameters p, and with bitwidths as low as 2 bits for weights, 1 bit for biases (pruned away) and 3 bits for activations, as seen in Fig. 6.

Given these optimal fixed-point representations, we can now evaluate whether the accuracy loss ∆a of the network is comparable to the individual accuracy losses ∆a^I_{l,p}. We execute the post-training quantization pipeline in Fig. 2 to sequentially quantize the parameters of the network to the fixed-point representations collected in r^I_Q returned by OptSearchCNN and evaluate ∆a every time parameter values are quantized. Weights, biases and activations of layers 1 to L are quantized in that order. In this case, unlike independent quantization, the preceding parameters remain quantized, as we feed the fixed-point network back into our pipeline (Fig. 2).
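This sequential evaluation can be sketched as a simple loop over the collected representations, using the same hypothetical helpers as in the earlier sketches:

```python
def sequential_quantize(model, a_ref, r_Q, L, order=("W", "B", "A")):
    """Quantize weights, biases and activations of layers 1..L in order,
    re-measuring the network-wide accuracy loss after every step
    (the curve shown in Fig. 5b)."""
    losses, m = [], model
    for p in order:
        for l in range(1, L + 1):
            m = clone_and_quantize(m, l, p, r_Q[(l, p)])
            losses.append(a_ref - accuracy(m))  # cumulative delta-a
    return m, losses
```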
Fig. 5b shows that as the respective parameters are sequentially quantized from layers 1 to L, the ∆a of the quantized network grows quite significantly, rising above the ε_{l,p} = 0.003 (0.3%) that was set for each parameter p_l of each layer l. After quantizing all parameters, we note a final inference accuracy loss of approximately 50%. While a substantial share of the accuracy of the network is already lost by quantizing only the weights (roughly 12%), the accuracy loss of the network increases even more dramatically by the time most of the activations are quantized. We also observe that the accuracy loss grows more rapidly when sequentially quantizing the activations than the weights. We reversed the order in which the parameters were quantized, quantizing activations, biases and weights instead, and found that this rapid increase in accuracy loss was then observed for the weights instead of the activations. From this we concluded that the rate at which this accuracy loss increases is not due to any one of the parameter types being quantized. Instead, once most of the layers of the third parameter type are quantized, too much of the information has been lost; quantizing the parameters of the network further therefore results in a more significant increase in accuracy loss.

By ignoring the errors due to the other quantized parameters in the network during the evaluation of the independent ∆a^I_{l,p} (EvalAccLossCNN) used to find the optimal fixed-point representations r^I_Q, independent OptSearchCNN is able to quantize the respective parameters p_l quite harshly, with extremely low bitwidths. Ignoring the effects of quantizing the other parameters in the network, however, clearly does not work, given the much larger accuracy loss of the entire quantized network in Fig. 5b. The error induced by the quantized parameters propagates through the network, and not taking this into account in the evaluation of ∆a_{l,p} when searching for the optimal (BW, F)*_{l,p} does not give promising results. The exact relationship of this error is still unknown and requires further experimentation and study. Works in the literature have investigated this from the perspective of the noise introduced when each of the parameters is changed, and have found ways to minimize this noise while achieving a quantized network, one instance being [16]. We note that the relationship between the individual accuracy losses ∆a_{l,p} and the accuracy loss ∆a of the quantized network is not simply linear or additive in nature.

In all our experiments, activations were more sensitive to quantization than weights or biases, as they required equal if not larger bitwidths, as observed in Fig. 6. We discuss the reason for this in Section VI-E. As observed in Fig. 5a, quantized activations for the most part had larger effects on the accuracy losses of the network than quantized weights. We hypothesize that this may be due to the more direct effect that activations, compared to weights, have on the feature maps created by the following layer. However, this discrepancy requires further study to establish concretely why the quantization of activations causes larger accuracy losses than the quantization of weights.

By observing the accuracy losses in Fig. 5a and 5b and the corresponding bitwidths in Fig. 6, with additional results in Appendix B, we note that biases were often pruned, with insignificant accuracy losses. From this we conclude that the biases had the least effect on the accuracy of the network. We further discuss the discrepancy in quantization behaviour between the three parameter types in Section VI-E.
C. Dependent Optimized Search

As discussed in Section V-B, we perform dependent OptSearchCNN under the assumption that the optimal (BW, F)*_{l,p} for parameter p_l depends on how the other parameters of the network have already been quantized to their optimal fixed-point representations up to that point. We now discuss schemes for controlling the accuracy loss of the CNN during dependent OptSearchCNN and, based on the results, choose the best scheme for the final method.

1) Acceptable accuracy loss allocation: We discussed controlling the allocation of ε^D_{l,p} to ensure acceptable losses in inference accuracy during and after the execution of dependent OptSearchCNN. We may think of this as allocating a budget of acceptable loss for the quantization of each parameter in the network, for which OptSearchCNN must find optimal (BW, F)*_{l,p}. For layer l and parameter p, during the execution of dependent OptSearchCNN we allocate an acceptable accuracy loss using

ε^D_{l,p} = ε^D_{L,p} · f(l)/f(L),    (9)

where ε^D_{L,p} is the final accuracy drop accepted after quantizing all L layers of parameter p, and f(x) is the scheme used to allocate the accuracy loss, discussed below. We experiment with 5 different allocation schemes, shown in Fig. 7 over the layers for a single parameter type (l = 1, · · · , L, p ∈ {W, B, A}):

1) Constant (f(x) = 1): ε^D_{l,p} is kept constant throughout the execution of OptSearchCNN. This allocation scheme spends the entire budget at the start of the algorithm.

2) Log (f(x) = ln(x)): ε^D_{l,p} is increased using the natural logarithm. Most of the budget is allocated early; the allocation rate decreases later on.

3) Linear (f(x) = x): Linear increase of ε^D_{l,p}, resulting in a steady allocation. Note that with this approach we also accept some loss when quantizing the parameter of the first layer.

4) Quadratic (f(x) = x²): Quadratic increase of ε^D_{l,p}. This approach is more conservative in allocating the budget.

5) Exponential (f(x) = 2^x): Exponential increase of ε^D_{l,p} with base 2, allocating extremely conservatively at first.

We also experimented with these allocation schemes in reverse order, i.e. l = L, · · · , 1, and found no significant differences; we therefore only present our results for l = 1, · · · , L. OptSearchCNN is initialized with the respective ε^D_{l,p} increasing up to a maximum of ε^D_{L,p} = 0.005 (0.5%) for parameter p, an initial bitwidth BW⁰_{l,p} of 12 bits and IND = False. After successful termination of dependent OptSearchCNN with the five different methods of allocating ε^D_{L,p}, OptSearchCNN returns optimal fixed-point representations r_Q = r^D_Q, where each (BW, F)*_{l,p} is found depending on how the preceding parameters have been quantized up to that point.

Fig. 8: Accuracy loss ∆a^D_{l,W} after quantizing the weights of all layers to their optimal (BW, F)*_{l,W} found using dependent optimized search with the different acceptable loss allocation schemes. The weights of the preceding layers are also quantized to (BW, F)*_{k,W} for k < l.

Fig. 9: Optimal bitwidths BW*_{l,W} of the weights of each layer found using dependent optimized search with the different acceptable loss allocation schemes, on a 15-layer sequential CNN trained on MNIST. The weights of the preceding layers are also quantized to (BW, F)*_{k,W} for k < l.
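Under our reconstruction of (9), the five schemes can be written compactly; the normalization by f(L) is our reading of the equation, consistent with the constant scheme spending the full budget immediately, and the function names are ours:

```python
import numpy as np

SCHEMES = {
    "constant":    lambda x: np.ones_like(x, dtype=float),
    "log":         np.log,                     # f(x) = ln(x)
    "linear":      lambda x: x.astype(float),
    "quadratic":   lambda x: x.astype(float) ** 2,
    "exponential": lambda x: 2.0 ** x,
}

def allocate_eps(eps_final, L, scheme):
    """Per-layer acceptable accuracy loss: eps_l = eps_L * f(l) / f(L)."""
    vals = SCHEMES[scheme](np.arange(1, L + 1))
    return eps_final * vals / vals[-1]

# e.g. allocate_eps(0.005, 15, "linear") rises steadily from 1/15 of the
# budget at layer 1 to the full 0.5% at layer 15
```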
To keep it concise, we present and discuss our results for executing dependent optimized search on the weights of one CNN while keeping the activations at floating-point precision. We also investigated this on activations while keeping the weights at floating-point precision; those results can be found in Appendix C. The observations and conclusions are based on tests on all parameters of all the sequential models for the four datasets presented in Section VI-A, the results of which can also be found in Appendix C.

For all allocation schemes, OptSearchCNN is able to find optimal fixed-point representations for the weights of all layers such that the accuracy loss stays below the maximum acceptable accuracy loss of 0.5%. Fig. 9 shows the optimal bitwidths of the quantized weights of each layer returned by dependent OptSearchCNN for the five different allocation schemes, and Fig. 10 the respective memory consumption of the weights. We now discuss our observations and conclusions for dependent optimized search, based on the results collected for both weights and activations here and in Appendix C, grouped by the acceptable loss allocation schemes.

• Greedy Allocation: The constant allocation scheme proves to be quite greedy: it results in extremely harsh quantization with low bitwidths for all layers, and therefore low memory consumption, as observed in Fig. 10, but this automatically resulted in higher accuracy losses ∆a^D_{l,p} compared to the other schemes, as observed in Fig. 8. Although this measured loss stays below what is acceptable (< 0.5%), the scheme essentially uses up most of the budget very early in the execution of OptSearchCNN. In one case, this caused dependent OptSearchCNN to use up too much of the budget early and quantize the initial layers too harshly, leaving it unable to find sufficiently low bitwidths for the parameters of the successive layers. The log allocation of ε^D_{l,p} is also on the greedier side, as it allocates most of the budget quite early while decreasing the allocation rate later. In our experiments it resulted in similar, if not larger, accuracy losses than the linear approach. However, it usually performed better than the constant scheme, since ε^D_{l,p} is allocated in a more controlled manner. The memory consumption, and therefore the respective bitwidths, were also comparable to the constant and linear schemes.

• Linear Allocation: The linear scheme is a simple one in which the allocated acceptable accuracy loss is increased steadily for the parameters of each layer. It generally resulted in lower accuracy losses than the constant and log schemes while still achieving sufficiently low accuracy losses and low bitwidths, as observed in Fig. 8 and 9 and in the other results in Appendix C. Throughout our experiments, the linear allocation scheme mostly resulted in the lowest memory consumption for both weights and activations while keeping the losses in inference accuracy acceptable. The simplicity of this allocation scheme proves quite useful for finding optimal fixed-point representations.

• Conservative Allocation: The two conservative allocation schemes for the acceptable accuracy loss, namely quadratic and exponential, require dependent OptSearchCNN to find minimum-bitwidth fixed-point representations while accepting almost no loss in inference accuracy for most of the parameters of the network, as observed in Fig. 7.
This often resulted in extremely low measured accuracy losses ∆a^D_{l,p} for the entire quantized network, as can be seen in Fig. 8, ending with most of the acceptable budget still available for further quantization. However, it automatically causes OptSearchCNN to find much larger bitwidths for the parameters of each layer than the other allocation schemes, resulting in higher memory usage. Clearly, this is not optimal, given that we could still quantize more aggressively, lowering the bitwidths of the parameters of some layers, and accept a higher accuracy loss than what is measured.

2) Optimal Allocation Scheme for Acceptable Accuracy Loss: From our experiments on all the models for dependent optimized search, as presented above, we find that the linear allocation of the acceptable accuracy loss ε^D_{l,p} performs best. Concretely, the linear scheme ensures acceptable losses in inference accuracy and sufficiently low bitwidths, thereby resulting in lower memory consumption for both weights and activations. The linear scheme provides a reasonable balance between greedy and conservative quantization, as the accuracy losses are acceptable and the bitwidths found are the lowest. Additional empirical data supporting this conclusion can be found in Appendix C. The final inference accuracy lost after quantizing all L layers, ∆a^D_{L,p}, stays well below the maximum ε^D_{L,p} = 0.005, with a small amount of the budget left to spare.

From the observations and conclusions of Section VI-B, we noted that the biases do not contribute much to the accuracy losses of the network. We therefore decided to use the constant allocation scheme for the acceptable accuracy loss when quantizing the biases of the CNN, with a value of ε^D_{l,B} = ε^D_{L,W}. Additionally, it helps that the linear approach is simple and straightforward.

D. Linear Allocation Dependent Optimized Search

Fig. 11: Results for the accuracy loss and the optimal bitwidths of the quantized CNN resulting from dependent optimized search using a linear acceptable loss allocation scheme on a 15-layer sequential CNN trained on MNIST. (a) Acceptable and measured accuracy loss ∆a^D_{l,p} after sequentially quantizing the parameters p_l in the order (W → B → A) from layers 1 to L. (b) Optimal bitwidths BW*_{l,p} of the parameters of each layer.

From the results gathered on the allocation scheme for the acceptable loss ε^D_{l,p} in inference accuracy when performing dependent optimized search, we noted that linear allocation worked best, giving sufficiently low accuracy losses for the resulting quantized network with low bitwidths and therefore a low memory consumption.
For the fractional offset, the tools find a fractional offset given the bitwidth such that none of the parameter distributions are subjected to clipping. We used this approach as a starting point for our algorithm in OptSearchCNN as defined in Step 2 in Section V-A. Concretely, we find a fractional offset F for a given bitwidth BW by assigning as many of the available integer bits in the bitwidth to the largest absolute valued number in the parameter distribution. This ensures that the range of values is covered avoiding any possibilities of clipping. The remaining bits are then assigned to the fractional part F as is defined in (8). For our baseline, we will use the fixed-bitwidth approach and quantize weights, biases and activations for all layers to a fixed-point representation with a bitwidth of 8 bits. 2) Final method: We now put together our final method based on the results collected earlier. From Section VI-C we concluded that using a linear allocation scheme for the acceptable accuracy loss D l,p using dependent OptSearchCNN resulted in a sufficiently quantized network with bitwidths as low as 2 bits with an accuracy loss of the quantized network that was acceptable. Having tested this on weights and activations independently, keeping the other parameter at floating-point precision, we found that this method worked well in both cases. We now aim to quantize all three parameter types of all layers to sequentially obtain a CNN with all parameters quantized to their respective optimal fixed-point representations such that the final accuracy loss of the entirely quantized model is still acceptable, as is required by our constraints in (7). For the acceptable accuracy loss , we specify an acceptable loss of = 0.01 (1%) for the final quantized network. We use the first half of this budget for quantization of the weights of all layers L,W = 0.005 and the second half (0.005 < l,A ≤ 0.01 for all 1 ≤ l ≤ L) for quantization of all activations of all layers. Therefore after termination of OptSearchCNN, L,A = = 0.01 and we need that ∆a L,A ≤ L,A = . Using the linear allocation scheme defined earlier, we then specify the intermediate acceptable accuracy losses for weights and activations l,W , l,A as was done previously. For biases however, we use a constant allocation scheme given that our experiments showed quantized biases to have a minimal effect on the accuracy loss. Additionally, in Fig. 8 we noted that the measured accuracy loss ∆a D L,W < D L,W after quantizing weights of all L layers, and so the remainder of the budget should suffice for the biases. Hence, we specify l,B = L,W = 0.005 for all 1 ≤ l ≤ L. We now execute dependent OptSearchCNN by ordering the for loops on lines 2, 3 as (W → B → A) from layers 1 to L. We use an initial bitwidth BW 0 l,p = 12 for all layers l Fig. 11 shows the result of dependent OptSearchCNN on a 15 layer sequential CNN trained on MNIST using the aforementioned scheme for allocating accuracy loss. Fig. 11a presents the allocated values for acceptable accuracy loss l,p and the respective measured accuracy loss ∆a l,p and Fig. 11b shows the optimal bitwidths found for the respective weights, biases and activations. We note in Fig. 11a that ∆a l,p ≤ l,p up until all parameters are quantized to their optimal fixed-point representations. After quantization of all the weights, the accuracy loss of the network is approximately ∆a L,W = 0.0036 (0.36%). 
Keeping the quantized weights, we find that the sequential quantization of the biases with the constant ε_{l,B} leaves the accuracy loss almost unchanged. Finally, with the quantized weights and biases, we find that linearly increasing ε_{l,A} up to ε_{L,A} to sequentially find fixed-point representations for the activations of all layers results in an accuracy loss that rises steadily and stays below 1% in the end, as originally desired.

In Fig. 11a we also note how the accuracy drop decreases for certain parameters (for example, the activations of layer C 8 and the weights of layer C 14). In such cases, when dependent OptSearchCNN finds an appropriate optimal bitwidth, ∆a^D_{l,p} ends up being lower than ∆a^D_{l−1,p}. The corresponding bitwidth for the activations of layer C 8 is also quite low (3 bits). We believe this may happen due to possible regularization effects or compensating multiplications in that layer that make up for the errors due to the quantization of the parameters of the preceding layers.

For this acceptable loss in inference accuracy, dependent OptSearchCNN was able to find optimal fixed-point representations (BW, F)*_{l,p} with bitwidths of 2–5 bits for weights, 1–3 bits for biases and 3–6 bits for activations, as seen in Fig. 11b. We also observe that the bitwidths required to represent the parameter values of each layer vary across the different layers and parameter types; we discuss these discrepancies in Section VI-E. In Fig. 11b we observe how dependent optimized search adapts the optimal bitwidth of each layer based on how the parameters of the network have already been quantized. When the network has been quantized too harshly, the parameters of the successive layer are quantized more conservatively, with larger bitwidths, to compensate, and so the bitwidth varies for each layer. This is solely because the evaluation of the accuracy loss now includes the other quantized parameter values in the network. Additional results with the other models and datasets can be found in Appendix D.

We experimented with changing the order in which OptSearchCNN looks for optimal fixed-point representations for the parameter types, to investigate whether this order matters, as discussed in Section V-B. We noted 6 possible ways in which we could order the execution of the algorithm over the parameter types. Through our experiments we were able to limit this to two, given that the biases had a very minor effect on the accuracy loss and the bitwidths found, and hence we only experimented with (W → B → A) and (A → W → B). While the results of the former approach were presented and discussed above, dependent OptSearchCNN did not work with the latter approach. Concretely, when searching for (BW, F)*_{l,W} after finding all the (BW, F)*_{l,A}, OptSearchCNN would often reach a point where the network was already quantized too much to find a low bitwidth for which the accuracy loss was acceptable. While we could not find a concrete reason why (A → W → B) did not work while (W → B → A) clearly did, we noticed that the optimal bitwidths found by dependent OptSearchCNN for the activations varied between the two cases. The variation was not conclusive, as the activation bitwidths were lower for some layers and higher for others. It is difficult to evaluate why this variation exists, given the non-linearity of the network.
However, we hypothesize that the multiply-accumulates might work differently in the two cases, and that this may affect the fixed-point representations needed to minimize the deviation of the values from their original floating-point values. From our experiments, the weights needed to be quantized first.

3) Final method vs Baseline: We now compare the results of our aforementioned method against the baseline of an 8-bit fixed bitwidth for all parameters in the network. The aim is to benchmark the quantized model resulting from our algorithm against that generated by common commercial tools on the metrics defined in the constraints in (7), namely accuracy loss, memory consumption and cost of multiplications.

Fig. 12 shows the accuracy loss ∆a of the quantized models provided by the two approaches for each pre-trained CNN model, based on the two architectures, Sequential (S) and Branched (B), trained on the four datasets. Fig. 13 shows the results of the calculations of memory consumption and cost of multiplications, respectively. These results clearly show that the 8-bit quantized models have lower accuracy losses than those generated by OptSearchCNN. However, given that dependent OptSearchCNN is able to find lower bitwidths for the parameters, the quantized models resulting from our method consume 42–62% (average 53%) less memory than those of the 8-bit fixed-bitwidth approach. The quantized models resulting from dependent OptSearchCNN also have a 60–87% (average 77.5%) lower cost of multiplications than the fixed-bitwidth approach. Given that the energy consumption depends on the cost of the operations, we note that the quantized models from OptSearchCNN would effectively consume less energy than the 8-bit bitwidth models, as noted in Fig. 13b.

Compared to a 32-bit floating-point precision CNN, an 8-bit implementation provides a significant reduction in memory and energy consumption at the cost of some accuracy loss. Our method, however, finds even more compressed and efficient CNN models by giving up a little more accuracy. Compared to the 32-bit floating-point CNN, the quantized models from dependent OptSearchCNN consume on average 88.4% (8.6x) less memory.

With regard to our approach of OptSearchCNN, we noted in these final tests that, in the branched CNNs, OptSearchCNN was able to reduce the bitwidths of the parameters of certain branches down to 1 bit, which in our paradigm translates to pruning the values of a layer. We observed this for the branched CNNs trained on CIFAR10, SVHN and Fashion-MNIST; empirical data for this may be found in Appendix D. The branches that were pruned were usually of the same type, namely the branch containing a pooling layer followed by a convolutional layer. This pruning had a minimal effect on inference accuracy, so OptSearchCNN was able to remove redundancies in the network and keep only the layers and branches that produce important features.

It is clear that while the classification accuracy of the 8-bit implementation is close to that of the full-precision model, this comes at the cost of larger bitwidths, which dependent OptSearchCNN is able to optimize. The quantized models resulting from dependent OptSearchCNN with a linear allocation scheme for the acceptable loss are significantly compressed, with accuracy losses just under 1%.
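For reference, the memory and multiplication-cost metrics behind this comparison follow directly from the expressions in (7); a sketch with purely illustrative per-layer values (hypothetical numbers, not our measured results):

```python
def memory_bits(counts, bws):
    """Total bits: sum over layers of (#values in the layer) * bitwidth."""
    return sum(n * bw for n, bw in zip(counts, bws))

def mult_cost(bw_w, bw_a):
    """Cost of multiplications: sum over layers of BW_W(l) * BW_A(l)."""
    return sum(w * a for w, a in zip(bw_w, bw_a))

# Hypothetical 3-layer example (values chosen only for illustration):
counts = [450, 2880, 10240]            # number of weights per layer
ours, base = [6, 4, 3], [8, 8, 8]      # layer-wise vs fixed 8-bit bitwidths
saving = 1 - memory_bits(counts, ours) / memory_bits(counts, base)
print(f"weight-memory saving: {saving:.0%}")  # ~59% for these numbers
```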
E. Discussion

We highlight, and where possible explain, some general observations from our experiments. 1) Layer-wise quantization: Fig. 11b clearly shows the advantage of finding optimal fixed-point representations for each parameter of the CNN layer-wise. In comparison, while the 8-bit fixed-bitwidth approach resulted in low inference accuracy losses, its quantized models consumed more memory. We also tried using a bitwidth lower than 8 bits for the fixed-bitwidth approach to quantize the models tested in Fig. 13; however, this generally led to a large degradation in inference accuracy (> 40%). One reason for this can be seen in Fig. 11b and in similar figures in Appendix D: some layers clearly require larger bitwidths than others in order to minimize the accuracy loss. The weights of the first layer always required a larger bitwidth than those of the successive layers. For activations, however, this varied, as the first and last layers generally required larger bitwidths. Additionally, biases were quantized to low bitwidths for only a few of the layers in the network. Given that the first layer has the fewest parameters (and therefore the fewest redundancies) and is also the layer that generates the initial feature maps, we expected it to require more bits to sufficiently represent its parameters. Additionally, given that we accept a very low accuracy loss for the weights and activations of the first layer, dependent OptSearchCNN quantizes these parameters very conservatively. This observation is also widely supported in the literature, as discussed in Section II: the first layers need to be quantized conservatively to retain the inference accuracy of the network. From our results in Appendix D and in Fig. 11b, we also often noticed that the activations of the last layer required more bits than those of the other layers in the network. This partially supports the work of [4], which notes that conservative quantization is required for the first and last layers to minimize the loss of inference accuracy. We reason that since the activations of the last layer classify the image into one of the 10 classes, quantizing this layer too harshly may flip activations one way or the other and thus cause images to be wrongly classified. Dependent OptSearchCNN with the linear allocation scheme for the acceptable inference accuracy loss was successfully able to quantize the first and last layers conservatively, as needed, while varying the bitwidths of the parameters per layer and compensating for harsh quantization where necessary. 2) Parameter-type discrepancies: Across all our results, whether the brute-force plots in Section IV-C or the results of optimized search, we noticed that activations generally required equal or larger bitwidths than those needed for weights. From this we concluded that activations are generally more sensitive to quantization than weights and generally require larger bitwidths to minimize the accuracy degradation of the CNN. We found this to be consistent with works in the literature, where the quantized models resulting from other approaches also required larger bitwidths for activations than for weights. To investigate this discrepancy, we studied the distribution of the weights and activations of all layers of the network, similar to Fig. 3. We observed that the distribution of activations had much longer tails than the distribution of weights. Concretely, activations covered a much larger range of values than weights did. Often there were many outlying activations at values much greater than 20 (a few activations reached values of 80). On the other hand, the distribution of weights usually followed a bell-shaped curve centered around zero, with values that at most reached 5. Given that the bitwidth indicates the range of values that can be covered, as noted in (3), larger bitwidths are clearly required to sufficiently represent the entire distribution of activations. As we reduce the bitwidth and fractional offset together, we increase the value of the LSB and therefore lose precision, as intermediate values are rounded to discrete values. From our experiments, this loss of precision seems to affect the accuracy of the network much more for activations than for weights; this was observed in Figs. 4 and 5a. Since only a very small fraction (0.05%) of activations were outliers, we also noticed that these could be clipped, allowing a slightly lower bitwidth (roughly 1-2 bits fewer) without much accuracy loss. Any further reduction in the bitwidth, and potentially also the fractional offset, for the quantization of activations would result in too large a loss of precision, leading to more significant accuracy degradation. Since the distribution of weights was more clustered around zero and covered a very small range of values, smaller bitwidths could be used to represent them with lower accuracy losses. We recommend further study into why the quantization of activations has a larger effect on the accuracy of the network than does the quantization of weights. Additionally, biases were often pruned (1 bit), which is counterintuitive considering that one would expect the biases to contribute substantially to the activations. However, we did not observe this to be the case. From our experiments, we conclude that most of the biases could effectively be removed without any effect on the accuracy of the network.
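The clipping observation above can be illustrated with a small Python sketch; the heavy-tailed stand-in distribution and the exact 99.95th-percentile rule are assumptions for illustration, chosen to mirror the 0.05% outlier fraction quoted above.

import numpy as np

# Illustration of the clipping idea: ignore the outlying ~0.05% of
# activation magnitudes before fixing the range, then count the integer
# bits needed to cover the clipped range.

def clipped_integer_bits(values, keep=0.9995):
    hi = np.quantile(np.abs(values), keep)    # ignore the long upper tail
    return hi, int(np.ceil(np.log2(hi + 1)))  # bits for the integer part

rng = np.random.default_rng(0)
acts = np.abs(rng.standard_cauchy(100_000))   # heavy-tailed stand-in
hi, bits = clipped_integer_bits(acts)
print(f"clip at {hi:.1f} -> {bits} integer bits (raw max {acts.max():.1f})")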
3) Advantage of arbitrary bitwidths: From the results in Fig. 11b and those in Appendix D, we observed that OptSearchCNN often suggested optimal bitwidths such as 2, 3, or 5 bits, essentially bitwidths that do not conform to those used in hardware. While it is quite difficult to create dedicated hardware to support such arbitrary bitwidths, our experiments clearly show that there is an incentive to work towards developing it, as arbitrary bitwidths allow networks to be quantized and compressed more efficiently, especially with the layer-wise quantization method. 4) Dependent optimized search vs. K-means approaches: As highlighted in Section II, approaches that utilize K-means clustering to quantize the parameters of networks cannot control the performance loss due to quantization. Dependent optimized search, on the other hand, finds minimum bitwidths based on a user-defined acceptable loss in inference accuracy, which makes the problem more controllable. Another disadvantage is that the K-means approach only reduces memory consumption for the weights of the model; the RAM consumed by the activations is not considered in this process. Using dependent optimized search, we also find low bitwidths to quantize the activations, thereby reducing the respective RAM requirements. Additionally, while K-means based approaches are able to reduce memory consumption by representing fewer distinct weights, the centroids themselves are 32-bit floating-point values, so these approaches still rely on floating-point operations. With optimized search, we eliminate the need for floating-point precision entirely by reducing the precision of all parameters to low-bitwidth fixed-point representations.
5) Implementation concerns: As mentioned earlier, our work was performed in simulation on a CPU and GPU, using the Keras API to handle the CNN models and Python to run the experiments. The main reason for working in simulation is that Keras does not allow the use of integer precision. We therefore needed to scale the values back down to floating point by dividing by 2^F in (1). With the quantization function in (1), we simulated quantization by creating a map between floating-point numbers and their fixed-point equivalent values, themselves stored in floating point. The quantized parameters of the CNN resulting from OptSearchCNN therefore still use floating-point precision to represent the fixed-point numbers that the original parameters have been mapped to. All the calculations involving low-precision fixed-point numbers are still performed in floating point and therefore occur on the floating-point ALUs of the CPU and GPU. Consequently, we cannot directly measure the benefit of the decrease in computational cost and memory consumption using our setup. To truly evaluate the performance benefit of our quantized model, inference of the quantized CNN would have to be implemented in a lower-level language like C, and the parameters would have to be represented as integers (simply removing the division by 2^F in (1)). Testing the resulting C program on a CPU would show the true benefit with respect to computational cost and memory consumption, since CPUs will rely on integer ALUs if all the numbers in the network, including the input images, are represented as integers. The quantized model could then also be implemented on more constrained hardware like microcontrollers and FPGAs, possibly requiring some additional lower-level optimizations for memory and computation. However, evaluating the performance benefit would be quite difficult, considering that our low-precision model relies on arbitrary bitwidths for the parameters, which are not supported by CPUs and other common hardware today. Therefore, we would have to round the optimal bitwidths we found up to bitwidths of 4, 8, or 16 bits. In such a case, the memory consumption and computational cost would be higher than what we found through our experimentation, although the model would still be optimized using layer-wise quantization. This is where the baseline approach and papers such as [2] have an advantage, in that their quantized models use bitwidths of 8 and 4 bits, respectively, and can therefore be implemented and tested on hardware such as CPUs and microcontrollers. In any case, we believe that our work should provide some incentive for developing hardware that supports arbitrary bitwidths, especially for neural networks, as our experiments clearly show its benefits.
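A minimal sketch of the simulated quantization just described is given below; the signed two's-complement clipping range is an assumption for illustration, since the exact form of (1) is the one defined earlier in the paper.

import numpy as np

# Sketch of simulated fixed-point quantization in the spirit of (1): round
# at the fractional offset F, clip to what BW bits can hold (a signed
# two's-complement range is assumed here), then scale back by 2**F so the
# Keras model can keep operating on floating-point values.

def simulate_quantize(x, bw, f):
    scaled = np.round(np.asarray(x) * 2.0 ** f)
    lo, hi = -2 ** (bw - 1), 2 ** (bw - 1) - 1
    return np.clip(scaled, lo, hi) / 2.0 ** f

# An integer deployment would keep np.clip(scaled, lo, hi) as-is, i.e.
# drop the division by 2**f, so that inference runs on integer ALUs.
print(simulate_quantize([0.24, -1.7, 3.9], bw=4, f=2))  # [ 0.25 -1.75  1.75]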
6) Applicability and limitations: Our work and the resulting conclusions are based on experiments with two types of architectures trained on four common datasets. However, many of the conclusions we drew support observations reported in the literature. For example, the conclusion that the first and last layers require larger bitwidths and more conservative quantization is also supported by prior work. We also expect the conclusions drawn from the brute-force analysis to be applicable to the quantization of parameters of other CNNs trained on other datasets, therefore allowing our algorithm to work on these other CNNs and datasets as well. To further validate the conclusions we have drawn, we recommend testing the algorithm on more datasets, like ImageNet, and on much larger and more complex models, like the ones commonly tested in the literature. Due to time constraints, we chose to restrict ourselves to models that could be trained on a desktop/laptop GPU within an hour, to allow us to collect more results. Additionally, more complex models like AlexNet and ResNet are quite large and are not the ones that would be ported to constrained hardware such as microcontrollers, given the small amount of on-chip memory. We also recommend testing our algorithm on CNNs for regression, image denoising, and segmentation tasks, on CNNs with skip connections (ResNet), and on other common tasks involving CNNs with varied architectures. Our algorithm should generally apply to these cases, given that we simply look at how to minimize the bitwidth of a group of numbers such that the accuracy loss (an evaluation metric) stays within acceptable bounds; we can expect to find some new observations and draw some new conclusions for those cases. We limited ourselves to CNNs for image classification to restrict the scope and because it is a common application.

VII. CONCLUSIONS AND FURTHER WORK

In this paper, we designed a pipeline to quantize the parameters of each layer of a pre-trained CNN based on a quantization function that converts floating-point numbers to a fixed-point representation characterized by a bitwidth and a fractional offset. Using this pipeline, we analyzed the effects of changing the fixed-point representations used to quantize the respective parameters of the CNN on the inference accuracy. The predictable pattern observed was the basis for the design of a method (OptSearchCNN) to efficiently search for the optimal bitwidths and fractional offsets for all parameters of each layer of a given pre-trained CNN. We also investigated two approaches (independent and dependent) to using this method, based on whether the errors due to the quantization of other parameter values are taken into account. Results showed that sequentially quantizing the network in a controlled manner, by controlling the acceptable accuracy loss at each stage, gives the best results with respect to accuracy loss, memory consumption, and cost of multiplications, at least for the two architectures trained on the four datasets that we tested our work on. Ignoring the effects of quantization from other parameters in the network proves to be error-prone, resulting in a CNN with large losses in inference accuracy. Our resulting method, dependent optimized search, was then compared against a common baseline approach and proved advantageous due to the lower bitwidths that OptSearchCNN was able to find.
Our recommendations for further work are as follows:
• Swapping the order of the for-loops in OptSearchCNN, thereby interleaving the search for fixed-point representations and searching across all parameters sequentially per layer, as compared to our current approach of executing optimized search one parameter type at a time for all layers.
• Using a more fine-grained approach by quantizing kernels individually rather than at the layer-wise granularity used in this paper.
• Investigating whether the optimal bitwidths found by OptSearchCNN are over-fit to the dataset used and whether the inference accuracy would vary when testing the quantized model on new data.
• Implementing the resulting quantized model, with the bitwidths suggested by OptSearchCNN, on a hardware platform to understand how closely our simulated results measure up in reality.
• Executing OptSearchCNN on larger and more complex models (AlexNet, ResNet, etc.) with more complex datasets like ImageNet.
• Investigating why certain weight/activation distributions can be quantized to a lower precision than others and whether this relates to the properties of the CNN.
• Using this approach on other types of neural networks for other types of problems (examples: regression, segmentation, RNNs, image denoising).
• Developing hardware that supports arbitrary bitwidths.
• Investigating why the accuracy losses due to quantized activations are generally larger than those due to quantized weights.
• Investigating the reasons why the order in which parameters are quantized matters.
• Testing OptSearchCNN with varied acceptable accuracy losses to determine the lowest acceptable accuracy loss possible for a given CNN trained on a dataset.
• Investigating the relationship between the quantization error after quantizing a certain distribution of parameters and the accuracy loss of the network.
• Developing a method to quantize the parameters of each layer independently of the other parameters in the CNN, which would allow the parameters of the network to be quantized in a random order.

APPENDIX

A. Additional results from brute-force analysis

Fig. 14 - Fig. 19 present additional results from the brute-force analysis on CNNs. While we ran the brute-force analysis on all the models and datasets we had available for experimentation, we present only a few here, as the other results are similar.

B. Additional results for independent quantization

Fig. 20 - Fig. 27 present additional results for independent quantization (work of Section VI-B). Each of these figures shows, for the respective model and dataset: (a) the inference accuracy loss ∆a^I_{l,p} for the parameters of each layer after finding its optimal (BW, F)_{l,p} using independent optimized search while keeping the other parameters at full precision, and (b) the inference accuracy loss ∆a of the CNN every time a parameter of a layer is quantized to the respective (BW, F)^*_{l,p} found using independent optimized search, with the parameters of the network quantized sequentially in the order of weights, biases, and activations from layers 1 to L.
Fig. 20: Inference accuracy loss, measured in the two ways above, after quantizing the parameters of the pre-trained 15-layer Sequential CNN trained on MNIST to their fixed-point representations using independent optimized search; acceptable loss ε_{l,p} = 0.3%.
Fig. 21: As in Fig. 20, for the pre-trained 15-layer Sequential CNN trained on SVHN; acceptable loss ε_{l,p} = 0.3%.
Fig. 22: As in Fig. 20, for the pre-trained 15-layer Sequential CNN trained on CIFAR10; acceptable loss ε_{l,p} = 0.3%.
Fig. 24: As in Fig. 20, for the pre-trained 23-layer Branched CNN trained on MNIST, with an additional panel (c) showing the optimal bitwidths BW^*_{l,p} of the parameters of each layer; acceptable loss ε_{l,p} = 0.3%.
Fig. 26: As in Fig. 24, for the pre-trained 23-layer Branched CNN trained on SVHN; acceptable loss ε_{l,p} = 0.3%.

C. Additional results for dependent quantization

Fig. 28 - Fig. 32 present additional results for dependent quantization (work of Section VI-C).

D. Additional results for the final method

Figs. 33-35 present (a) the acceptable and measured inference accuracy loss ∆a^D_{l,p} after sequentially quantizing the parameters p_l in the order (W → B → A) from layers 1 to L, and (b) the optimal bitwidths BW^*_{l,p} of the parameters of each layer, for the quantized CNN resulting from dependent optimized search with a linear acceptable-loss allocation scheme on a 15-layer sequential CNN trained on Fashion-MNIST (Fig. 33), SVHN (Fig. 34), and CIFAR10 (Fig. 35).
Joint Rate and SINR Coverage Analysis for Decoupled Uplink-Downlink Biased Cell Associations in HetNets

Load balancing by proactively offloading users onto small and otherwise lightly-loaded cells is critical for tapping the potential of dense heterogeneous cellular networks (HCNs). Offloading has mostly been studied for the downlink, where it is generally assumed that a user offloaded to a small cell will communicate with it on the uplink as well. The impact of coupled downlink-uplink offloading is not well understood. Uplink power control and spatial interference correlation further complicate the mathematical analysis as compared to the downlink. We propose an accurate and tractable model to characterize the uplink SINR and rate distribution in a multi-tier HCN as a function of the association rules and power control parameters. Joint uplink-downlink rate coverage is also characterized. Using the developed analysis, it is shown that the optimal degree of channel inversion (for uplink power control) increases with the load imbalance in the network. In sharp contrast to the downlink, minimum path loss association is shown to be optimal for the uplink rate. Moreover, with minimum path loss association and full channel inversion, the uplink SIR is shown to be invariant of the infrastructure density. It is further shown that a decoupled association, employing differing association strategies for uplink and downlink, leads to a significant improvement in joint uplink-downlink rate coverage over the standard coupled association in HCNs.

I. INTRODUCTION

Supplementing existing cellular networks with low power access points (APs), generically referred to as small cells, leads to wireless networks that are highly heterogeneous in AP maximum transmit powers and deployment density [1], [2]. Although the mathematical modeling and performance analysis of HCNs, particularly for the downlink, has received significant attention in recent years (see [3] for a survey), attempts to model and analyze the uplink have been limited. In popular uplink-intensive services like cloud storage and video chat, uplink performance is as important as (if not more important than) that of the downlink. Moreover, in services like video chat, the traffic is symmetric, and thus what really matters is the ability to achieve the required QoS in both uplink and downlink. The insights from downlink design cannot be directly extrapolated to the uplink setting in HCNs, as the latter is fundamentally different due to (i) the homogeneity of the transmitters or user equipments (UEs), (ii) the use of uplink transmission power control towards the desired AP, and (iii) the correlation of the interference power from a UE with its path loss to its own serving AP.

A. Background and related work

Load balancing and power control. Due to the large AP transmission power disparity across different tiers in HCNs, the nominal UE load per AP (under downlink maximum power association) is highly imbalanced, with macrocells being significantly more congested than small cells. It is now well established (both empirically and theoretically) that biasing UEs towards small cells leads to significant improvement in downlink throughput (see [1], [2], [4] and references therein). In conventional homogeneous macrocellular networks, coupled associations are used, wherein the UE is paired with the same AP for both uplink and downlink transmission.
Traditionally, this association has been based on the maximum downlink received power as measured at the UE, which also leads to a maximum uplink power association with the same AP, since the downlink and uplink channels are nearly reciprocal in terms of shadowing and path loss, and all APs, and likewise all UEs, had essentially the same transmit powers. However, this is clearly not the case in HCNs with load balancing. Biasing UEs towards small cells with a coupled association not only improves the downlink rate (despite a lower SINR) due to the load balancing aspect, but also simultaneously improves the uplink signal-to-noise ratio (SNR). This is because the offloaded UEs now transmit, on average, to closer APs: they are more likely to transmit to a nearby small cell whose downlink power was not large enough for association in the absence of biasing. It is questionable, though, whether the bias designed to encourage downlink offloading is also optimal for the uplink. Since transmit power is a critical resource at a UE, power control is employed to conserve energy and to reduce interference. 3GPP LTE networks support the use of fractional power control (FPC), which partially compensates for path loss [5]. In FPC, a UE with path loss L to its serving AP transmits with power L^ε, where 0 ≤ ε ≤ 1 is the power control fraction (PCF). Thus, with ε = 0, each UE transmits with constant power, and with ε = 1, the path loss is fully compensated, corresponding to channel inversion. From a network point of view, ε can be interpreted as a fairness parameter, where a higher PCF helps the cell-edge users meet their SINR target but generates higher interference [6]-[11]. Since the association strategy influences the statistics of the path loss in HCNs, the aggressiveness of the power control should be matched to the association strategy. It is therefore important to develop an analytical model that captures the interplay between load balancing and power control in the uplink; this is one of the goals of this paper. Uplink analysis. The use of spatial point processes, particularly the homogeneous Poisson point process (PPP), for modeling HCNs and deriving the corresponding downlink coverage and rate under various association and interference coordination strategies has been extensively explored of late (see [3] and references therein). The homogeneous PPP assumption for AP locations not only greatly simplifies the downlink interference characterization, but also comes with empirical and theoretical support [12]-[15]. However, analysis of the uplink in such a setting is highly non-trivial, as the uplink interference does not originate from Poisson distributed nodes (the UEs here). This is because in orthogonal multiple access schemes, like OFDMA, there is one UE per AP, located randomly in the AP's association area, that transmits on a given resource block. As a result, the uplink interference can be viewed as stemming from a Voronoi perturbed lattice process (see [16] for more discussion), for which an exact interference characterization is not available. Moreover, due to uplink power control, the transmit power of an interfering UE is correlated with its path loss to the AP under consideration. Consequently, various generative models [10], [17], [18] have been proposed to approximate uplink performance in OFDMA Poisson cellular networks.
Most of these models, however, only apply to certain special cases, such as single-tier (macro-only) networks [10] or full channel inversion with truncation and nearest-AP association [17]; they do not extend naturally to HCNs with flexible power control and association. The recent work in [18], however, adopts an approach similar to the one proposed in this paper for approximating the interfering UE process to derive the uplink SIR distribution in a two-tier network with a (simpler) linear power control and biased association. All these generative models, however, ignore the aforementioned conditioning, which may yield unreliable performance estimates. Also, none of these prior works characterizes the impact of load balancing on the uplink rate distribution or the joint uplink-downlink rate coverage. Joint uplink-downlink coverage. When UEs employ different association policies for uplink and downlink (called decoupled association [19]-[21]), possibly different APs serve the user in the uplink and downlink. Characterizing the correlation between the respective uplink and downlink path losses is then vital for the joint coverage analysis. Such a correlation analysis was addressed in the recent work [20] for the special case of a two-tier scenario with max-received power association for the downlink and nearest-AP association for the uplink. However, the uplink coverage in [20], [21] was derived assuming the interfering user process follows a homogeneous PPP, which is not accurate for uplink analysis (as discussed above). The analysis in this paper also addresses the joint uplink-downlink rate and SINR in greater generality, with an arbitrary association and number of tiers; the traditional coupled association is a special case of this general setting.

B. Contributions and outcomes

A novel generative model is proposed to analyze uplink performance, where the APs of each tier are assumed to be distributed as an independent homogeneous Poisson point process (PPP) and all UEs employ a weighted path loss based association and FPC. The interfering UE locations are approximated as an inhomogeneous PPP with intensity dependent on the association parameters. Further, the correlation between the uplink transmit power of each interfering UE and its path loss to the AP under consideration is captured. Based on this novel approach, the contributions of the paper are as follows. Uplink SINR and rate distribution. The complementary cumulative distribution function (CCDF) of the uplink SINR and rate are derived for a K-tier HCN as a function of the (tier-specific) association and power control parameters in Sec. III. The general expression is simplified for certain plausible scenarios, and simpler upper and lower bounds are also derived. Joint uplink-downlink rate coverage. The joint rate/SINR coverage is defined as the joint probability of the uplink and downlink rate/SINR exceeding their respective thresholds. The joint coverage is derived in Sec. IV by combining the derived uplink coverage analysis with a characterization of the joint distribution of the uplink and downlink path losses for arbitrary uplink and downlink association weights. The uplink and downlink interference is, however, assumed independent for tractability. The analyses of Sec. III and IV (and the underlying assumptions) are validated by comparison with simulations in Sec. V for a wide range of parameter settings, which builds confidence in the following design insights. Insights. Using the developed model, it is shown in Sec. VI that:
• The PCF maximizing the uplink SIR coverage is inversely proportional to the SIR threshold. As a result, edge users prefer a higher PCF than cell-interior users. A similar result was shown in [10] for macrocellular networks.
• With increasing disparity in the association weights across the tiers, the optimal PCF increases across all SIR thresholds.
• Minimum path loss association (i.e., the same association weight for all tiers) leads to optimal uplink rate coverage. This is in contrast to the corresponding result for the downlink [4], [15], [22].
• For minimum path loss association and full channel inversion based power control, the uplink SIR coverage is independent of the infrastructure density in multi-tier networks¹. This trend is similar to that in downlink HCNs [23], [24]. However, the corresponding uplink SIR is shown to be stochastically dominated by that of the downlink.
• With a static uplink-downlink resource allocation ratio, the uplink and downlink association weights that maximize their respective coverage also maximize the joint uplink-downlink coverage.
• As a result, a decoupled association, employing different association weights for uplink and downlink, maximizes the joint uplink-downlink rate coverage.

¹A similar result was shown in [17] under a different deployment model for interfering UEs.

II. SYSTEM MODEL

A co-channel deployment of a K-tier HCN is considered, where the locations of the APs of the k-th tier are modeled as a 2-D homogeneous PPP Φ_k ⊂ R² of density λ_k. All APs of tier k are assumed to transmit with power P_k. Further, the UEs in the network are assumed to be distributed according to an independent homogeneous PPP Φ_u of density λ_u. The signals are assumed to experience path loss with a path loss exponent (PLE) α, and the power received at Y ∈ R² from a node at X ∈ R² transmitting with power P_X is P_X H_{X,Y} L(X, Y)^{-1}, where H ∈ R⁺ is the fast fading power gain and L is the path loss. The random channel gains are assumed to be Rayleigh distributed with unit average power, i.e., H ∼ exp(1), and L(X, Y) ≜ S_{X,Y} ‖X − Y‖^α, where S ∈ R⁺ denotes the large-scale fading (or shadowing), assumed i.i.d. across all UE-AP pairs but the same for uplink and downlink. The small-scale fading gain H is assumed i.i.d. across all links. WLOG, the analysis in this paper is done for a typical UE located at the origin O. The AP serving this typical UE is referred to as the tagged AP.

A. Uplink power control

Let B_X ∈ Φ denote the AP serving the UE at X ∈ R² and define L_X ≜ L(X, B_X) to be the path loss between the UE and its serving AP. A fractional path loss inversion based power control is assumed for uplink transmission, where a UE at X transmits with a power spectral density (dBm/Hz) of P_X = P_u L_X^ε, where 0 ≤ ε ≤ 1 is the power control fraction (PCF) and P_u is the open loop power spectral density [5]. Thus, the total transmit power of a user depends on the spectral resources allocated to the user and on its path loss. For tractability, the per-user maximum power constraint is ignored in this paper. However, if the dependence of the transmit power on the load (or resources) is ignored, the analysis in this paper can be extended to incorporate a maximum power constraint, similar to [17]. Orthogonal access is assumed in the uplink without multi-user transmission, i.e., only one UE transmits in any given resource block. Let Φ_u^b be the point process denoting the locations of the UEs transmitting on the same resource as the typical UE. Due to this orthogonal access, Φ_u^b is not a PPP but a Poisson-Voronoi perturbed lattice (per [16]).
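The path loss and power control definitions above can be summarized in a short Python sketch; all numeric values (PLE, shadowing draw, open-loop power spectral density) are illustrative assumptions.

import numpy as np

# Sketch of the uplink model pieces just defined: path loss
# L(X, Y) = S * ||X - Y||**alpha and fractional power control P = Pu * L**eps.

def path_loss(x, y, alpha, shadow):
    return shadow * np.linalg.norm(np.subtract(x, y)) ** alpha

def tx_power(pl, pu, eps):
    # eps = 0: constant transmit power; eps = 1: full channel inversion.
    return pu * pl ** eps

L = path_loss((0.0, 0.0), (120.0, 50.0), alpha=3.5, shadow=1.2)
for eps in (0.0, 0.5, 1.0):
    print(eps, tx_power(L, pu=1e-9, eps=eps))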
The uplink SINR of the typical UE (at O) on a given resource block is then

SINR = H L^{ε−1} / (SNR^{−1} + I),

where SNR ≜ P_u G L_0 / N_0, with N_0 being the thermal noise spectral density, G the antenna gain at the tagged AP, and L_0 the free-space path loss at a reference distance, and I denotes the (normalized) uplink interference characterized in Sec. III. Henceforth, the channel power gains between the interfering UEs and the tagged AP, {H_{X,B_O}}, are simply denoted by {H_X}, and the index 'O' of the typical user is dropped wherever implicitly clear.

B. Weighted path loss association

Every UE is assumed to use weighted path loss association for both uplink and downlink, in which a UE at X associates in the uplink to an AP of tier K_X, where K_X = arg max_{k∈{1,…,K}} T_k L_{min,k}(X)^{−1}, with L_{min,k}(X) = min_{Y∈Φ_k} L(X, Y) being the minimum path loss of the UE from the k-th tier and T_k the uplink association weight for the APs of the k-th tier. The downlink association is similar, with possibly different per-tier weights denoted by {T′_k}_{k=1}^{K} and the selected tier denoted by K′_X. The presented association encompasses biased cell association, where T_k = P_k B_k, with P_k and B_k being the transmit power of the APs of the k-th tier and the corresponding bias, respectively. Note that if all the association weights are identical, the rule reduces to minimum path loss association. For ease of notation, we define T̃_k ≜ T_k/T_1. As a result of the above association model, the uplink association cell of an AP of tier k located at X is C_X = {U ∈ R² : T_k L(U, X)^{−1} ≥ T_j L_{min,j}(U)^{−1} ∀ j}; the downlink association cell can be defined similarly. Note that the described association strategy (both for uplink and downlink) is stationary [25], and hence the resulting association cells are also stationary. The uplink association cells in a two-tier setting with P_1/P_2 = 20 dB resulting from downlink max-power association and from minimum path loss association are contrasted in Fig. 1. It is assumed that each AP has at least one user in its association region with data to transmit in the uplink. Further, the AP queues for downlink transmission are assumed to be saturated, implying that each AP always has data to transmit in the downlink. The fraction of resources reserved for the uplink is denoted by η, and the rate of a user is modeled as

Rate = (γW/N) log₂(1 + SINR),    (3)

where W is the bandwidth, N denotes the total number of uplink or downlink users sharing the γ fraction of resources, and γ = η for the uplink and 1 − η for the downlink. The notation used in this paper is summarized in Table I.

III. UPLINK SINR AND RATE COVERAGE

This is the main technical section of the paper, where we detail the proposed uplink model and the corresponding analysis.

A. General case

With Rayleigh fading, the uplink SIR CCDF of the typical UE is

P(SIR > τ) = Σ_{k=1}^{K} G_k E_{L|K=k}[ L_{I|K=k}(τ L^{1−ε}) ],    (4)

where I is the uplink interference at the tagged AP B, and L_{I|K=k} is the Laplace transform of the interference conditioned on the k-th tier being the serving tier. The following lemma characterizes the path loss distribution of a typical UE in the given system model. Lemma 1. Path loss distribution at the desired link. With a_j ≜ π λ_j E[S^δ] and δ ≜ 2/α, the PDF of the path loss of a typical UE to its serving AP is f_L(l) = Σ_{k=1}^{K} δ a_k l^{δ−1} exp(−l^δ Σ_{j=1}^{K} a_j (T_j/T_k)^δ), and the PDF conditioned on the serving tier being k is f_{L|K=k}(l) = (δ a_k l^{δ−1}/G_k) exp(−l^δ Σ_{j=1}^{K} a_j (T_j/T_k)^δ), where G_k = a_k T_k^δ / Σ_{j=1}^{K} a_j T_j^δ is the probability of the typical UE associating with tier k. Proof: The proof follows by generalizing the results in [13], [26] to our setting. Define the propagation process (introduced in [13]) from the APs of tier j to the typical user as N_j. The path loss to the tagged AP of tier k then has the CCDF P(L > l | K = k) = exp(−l^δ Σ_{j=1}^{K} a_j (T_j/T_k)^δ). The distribution in Lemma 1 is not, however, identical to the distribution of the path loss between an interfering UE and its serving AP, since the latter is the conditional distribution given that the interfering UE does not associate with the tagged AP. This correlation is formalized in the corollary below.
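As a concrete illustration of the weighted path loss association of Sec. II-B, the following Python sketch selects the serving tier as K_X = arg max_k T_k L_{min,k}(X)^{−1}; the AP positions and weights are illustrative, and shadowing is omitted for brevity.

import numpy as np

# Weighted path loss association: the UE picks the tier maximizing
# T_k / L_min_k(X). With identical weights this reduces to minimum
# path loss association.

def associate(ue, tiers, alpha=3.5):
    # tiers: list of (T_k, array of AP positions); returns (tier, path loss).
    best = None
    for k, (T, aps) in enumerate(tiers):
        l_min = min(np.linalg.norm(ue - ap) ** alpha for ap in aps)
        if best is None or T / l_min > best[0]:
            best = (T / l_min, k, l_min)
    return best[1], best[2]

rng = np.random.default_rng(1)
macro = (1.0, rng.uniform(-1e3, 1e3, size=(5, 2)))    # tier 1, T_1 = 0 dB
small = (0.01, rng.uniform(-1e3, 1e3, size=(30, 2)))  # tier 2, T_2 = -20 dB
print(associate(np.zeros(2), [macro, small]))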
Corollary 1. Path loss distribution at an interfering UE. The PDF of the path loss of a UE at X associated with tier j, conditioned on it not lying in the association cell C_B of the tagged AP at B of tier k and on the corresponding path loss being L(X, B) = y, follows by truncating the serving-link path loss distribution accordingly. Proof: An interfering UE at X cannot associate with the tagged AP of tier k, which, given the association policy, implies that the corresponding path loss is bounded as L_X < (T_j/T_k) y. Due to the uplink orthogonal access within each AP, only one UE per AP transmits on the typical resource block and hence contributes to the interference at the tagged AP. Therefore, Φ_u^b is not a PPP but a Poisson-Voronoi perturbed lattice (per [16]), and hence the functional form of the interference (or the Laplace functional of Φ_u^b) is not tractable. Based on the following remark, we propose an approximation that characterizes the corresponding process as an inhomogeneous PPP. Remark 1. Thinning probability. Conditioned on an AP of tier k being located at V ∈ R², a UE at U ∈ R² associates with V with probability P(B_U = V) = exp(−G_k L(V, U)^δ). Assumption 1. Inhomogeneous PPP of interferers. The UEs interfering on the typical resource block are modeled as an inhomogeneous PPP whose intensity is obtained by thinning the parent UE process according to Remark 1. The basis of the above assumption is Remark 1 along with the fact that only one UE per AP can potentially interfere with the typical UE in the uplink. Thus, the maximum density of UEs that might potentially interfere in the uplink from tier j is λ_j. Assuming this parent process to be a PPP of density λ_j, the propagation process of these UEs to the tagged AP has intensity measure function δ a_j x^{δ−1}. However, the intensity of this parent process has to be appropriately thinned, as per Remark 1, to account for the fact that these UEs do not associate with the tagged AP. The resulting process N_{u,j} has an intensity that increases with increasing path loss from the tagged AP. The methodology proposed in [18] for modeling the non-uniform intensity of Φ_u^b was based on a curve-fitting approach and hence may not be accurate for more diverse system parameters. Assumption 2. Tier-wise independence. The point processes of interfering UEs from the different tiers are assumed to be independent, so that the overall interference is the sum of independent per-tier contributions. Assumption 3. Independent path loss. The path losses {L_X}_{X∈Φ_u^b} are assumed to follow the Gamma distribution given by Corollary 1 and are assumed independent (but not identically distributed). Under these assumptions, Lemma 2 characterizes the Laplace transform of the uplink interference. Proof: See Appendix A. Using the above lemma and (4), Theorem 1 gives the uplink SINR coverage probability for the proposed uplink generative model. The SIR coverage can be derived by letting SNR → ∞ in this theorem; Corollary 2 states the resulting uplink SIR coverage probability, with the per-tier conditional coverage denoted by P_k(·). The coverage expression for the most general case involves two nested integrals and a lookup table, so simpler bounds are also of interest. Corollary 3 gives an upper bound on the uplink SIR coverage. Proof: See Appendix B. Remark 2. It can be noted from the above proof that the coverage upper bound is, in fact, exact for full channel inversion, i.e., ε = 1. Corollary 4 gives a lower bound P_l on the uplink SIR coverage. Proof: See Appendix C.
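The thinning step at the heart of this generative model is easy to mimic numerically; in the Python sketch below, a candidate tier-j interferer at path loss l from the tagged AP is retained with probability 1 − exp(−G_k l^δ), the complement of the association probability in Remark 1, so that the retained intensity grows with the path loss. All inputs are illustrative.

import numpy as np

# Thinning of candidate interferers per Remark 1: a UE that would have
# associated with the tagged AP (probability exp(-G_k * l**delta)) cannot
# interfere, so it is kept with the complementary probability.

def thin_interferers(path_losses, g_k, delta, seed=0):
    rng = np.random.default_rng(seed)
    pl = np.asarray(path_losses, dtype=float)
    keep = rng.random(pl.size) < 1.0 - np.exp(-g_k * pl ** delta)
    return pl[keep]

candidates = np.random.default_rng(2).uniform(1.0, 1e4, size=1000)
print(thin_interferers(candidates, g_k=0.05, delta=0.5).size)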
B. Special cases

For the following plausible special cases, the uplink SIR coverage expression is further simplified. Corollary 5 (K = 1) gives the uplink SIR coverage in a single-tier network of density λ_1, in terms of a_1 = λ_1 π E[S^δ]. This expression differs from the one in [10] due to the proposed interference characterization: in [10], the distribution of the path loss of each interfering UE to its serving AP was assumed i.i.d. Moreover, the uplink SIR coverage in a K-tier network with minimum path loss association is the same as the coverage of a single-tier network of density λ = Σ_{k=1}^{K} λ_k. Corollary 8 (ε = 1) gives the coverage with full channel inversion. Corollary 9 (ε = 0, T_j = T_k ∀ j, k) gives the uplink SIR coverage without power control and with minimum path loss association. Corollary 10 (ε = 1, T_j = T_k ∀ j, k) gives the uplink SIR coverage with full channel inversion based power control and minimum path loss association; the resulting expression is independent of the infrastructure density. This trend is similar to the results proved for the downlink SIR in macrocellular networks [12] and HCNs [23], [24].

C. Uplink rate distribution

The rate of a user depends on both the SINR and the load at the tagged AP (as per (3)), which in turn depends on the corresponding association area |C_B|. The weighted path loss association and the PPP placement of the APs lead to complex association cells (see Fig. 1) whose area distribution is not known. However, the association policy is stationary [25], and hence the mean uplink association area of a typical AP of tier k is A_k/λ_k, where A_k denotes the corresponding association probability. The association area approximation proposed in [15] is used to quantify the uplink load distribution P(N = n), n ≥ 1, at the tagged AP. Using Corollary 2 and (3), and assuming independence between the SINR and the load, the uplink rate coverage is given in the following theorem. Theorem 2. Under the presented system model and assumptions, the uplink rate coverage follows by averaging the conditional SIR coverage P_k(2^{ρ̃n} − 1) over the load distribution and the tiers, where P_k is given in Corollary 2 and ρ̃ ≜ ρ(ηW)^{−1}. Proof: Using the rate expression in (3), the event {Rate > ρ} conditioned on the load N = n is equivalent to {SINR > 2^{ρ̃n} − 1}, where ρ̃ = ρ(ηW)^{−1} is the normalized rate threshold. Since APs with larger association regions have both higher load and larger UE-to-AP distances, the load and SINR are correlated. For tractability, this dependence and the thermal noise are ignored, as in [15], to yield P(SINR > 2^{ρ̃n} − 1 | K = k, N = n) ≈ P_k(2^{ρ̃n} − 1). Corollary 11. If the load at each AP is approximated by its respective mean, N̄_k ≜ E[N | K = k] = 1 + 1.28 A_k λ_u/λ_k [15], a simpler expression for the uplink rate coverage follows. The corollary above simplifies the rate coverage expression of Theorem 2 by eliminating a sum, sacrificing a bit of accuracy.

IV. JOINT UPLINK-DOWNLINK RATE COVERAGE

The joint uplink-downlink rate coverage is defined formally below. Definition 1. The uplink-downlink joint rate coverage is the probability that the rates on both links exceed their respective thresholds, i.e., R_J(ρ_u, ρ_d) ≜ P(Rate_u > ρ_u, Rate_d > ρ_d). It can equivalently be interpreted as the fraction of users in the network whose uplink and downlink rates both exceed their respective thresholds. For deriving the joint coverage, the joint path loss distribution needs to be characterized. For the special case of coupled association, the uplink and downlink path losses are identical; in the general case, they are correlated. The following lemma characterizes the joint distribution of the path losses for arbitrary downlink and uplink association weights. Lemma 3. Joint path loss distribution. The joint PDF of the uplink path loss (L) and the downlink path loss (L′) for the typical user under the given setting is obtained from the joint CCDFs for the cases K = K′ = k and K = k, K′ = j with k ≠ j; differentiating these CCDFs leads to the corresponding PDFs.
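Both the rate coverage of Theorem 2 (via the mean load of Corollary 11) and the joint coverage of Definition 1 are straightforward to estimate from samples, as in the Python sketch below; the mean-load constant 1.28 is the one quoted from [15], while the SINR samples and all parameter values are illustrative stand-ins.

import numpy as np

# Sketch tying together the rate model of (3), the mean-load approximation
# N_k = 1 + 1.28 * A_k * lambda_u / lambda_k from [15], and the joint rate
# coverage R_J of Definition 1 (with independent uplink/downlink samples,
# mirroring the independence assumption used later in Theorem 3).

def mean_load(A_k, lam_u, lam_k):
    return 1 + 1.28 * A_k * lam_u / lam_k

def rate(sinr, eta, W, N):
    return eta * W / N * np.log2(1 + sinr)

rng = np.random.default_rng(3)
N = mean_load(A_k=0.6, lam_u=200e-6, lam_k=5e-6)     # densities per m^2
r_ul = rate(rng.exponential(2.0, 10_000), 0.5, 10e6, N)
r_dl = rate(rng.exponential(4.0, 10_000), 0.5, 10e6, N)
rho = 128e3                                          # 128 Kbps threshold
print("joint rate coverage:", np.mean((r_ul > rho) & (r_dl > rho)))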
The downlink SIR analysis in [23] ignored shadowing; however, that analysis can be adapted to the presented setting to give the Laplace transform of the downlink interference in the following lemma (presented without proof). Theorem 3. Using the mean load approximation for the uplink and downlink, and assuming the uplink and downlink interference to be independent, the joint uplink-downlink rate coverage follows. Proof: The proof follows by noting that the joint rate coverage can be written in terms of the joint SIR coverage, as in Theorem 2, and by invoking the assumed independence of the uplink and downlink interference; the final expression is then obtained by using the rate model of (3).

V. VALIDATION

The results show that: 1) the derived analysis matches simulations closely for a range of parameters, validating Assumptions 1, 2, and 3; 2) neglecting the proposed thinning and/or conditioning (as is done in prior works) leads to significant deviation from the actual coverage; and 3) thermal noise has a minimal impact on the uplink SINR (which could also be due to the higher BS density). Note that a value of T̃_2 = −20 dB corresponds to a typical power difference between small cells and macrocells and hence is equivalent to downlink maximum power association. The rate coverage obtained from simulation and analysis (Corollary 11) is compared for a two-tier setting in Fig. 3a and for a three-tier setting in Fig. 3b. The user density used in these plots is λ_u = 200 per sq. km. The joint rate distribution derived from analysis and simulation is shown in Fig. 4 for an uplink resource fraction of η = 0.5. The close match between analysis and simulations for a wide range of parameters in these plots validates the mean load assumption and the downlink-uplink interference independence assumption.

VI. OPTIMAL POWER CONTROL AND ASSOCIATION

The uplink SIR and rate coverage expressions of Corollary 2, Theorem 2, and Theorem 3 can be used to numerically find the optimal power control and association weights. First, however, we focus on the coverage lower bound P_l of Corollary 4 and obtain the following proposition. Proposition 1. The lower bound P_l is maximized by minimum path loss association and ε = 0.5. Proof: Using Corollary 4, P_l is maximized with the weights {T_j*} for which the resulting exponent is minimized, which occurs with T_j = T_k ∀ j, k. Moreover, in that case P_l(τ) = exp(−τ^δ π² δ / (sin(πδ) sin(πε))), which is maximized for ε = 0.5. Remark 5. Since the lower bound overestimates the uplink interference by neglecting the correlation of the transmit power of an interfering user with its path loss to the tagged AP (and hence treats the interference as if originating from an ad hoc network), the result of an optimal PCF of 0.5 is in agreement with results for ad hoc wireless networks [27], [28] (derived under quite different modeling assumptions, though). Power control. Since the power control impacts only the uplink SIR and not the load (unlike the association), the optimal PCF is obtained using the SIR coverage of Corollary 2. The SIR threshold plays a vital role in determining the optimal PCF. Channel inversion is more beneficial for cell-edge UEs, as they suffer from higher path loss, and as a result the optimal PCF decreases with the SIR threshold, as shown in Fig. 5. This is similar to the insight obtained for single-tier networks in [10]. It is interesting to note that the result on the optimal PCF in Proposition 1 applies only at moderate SIR thresholds. Further, a higher association weight imbalance leads to a uniform (across all thresholds) increase in the optimal PCF, as the path losses in the network increase. It can also be observed that the optimal PCF is relatively insensitive to the tier densities in the two-tier network, with no dependence seen in the case of minimum path loss association. A similar trend translates to the uplink rate distribution too. The variation of the uplink fifth percentile rate (or edge rate, ρ | R(ρ) = 0.95) and the median rate (ρ | R(ρ) = 0.50) with the PCF is shown in Fig. 6. The PCF maximizing the fifth percentile rate is higher than that maximizing the median rate, since the former represents users with lower uplink SIR.
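A quick numeric check of the power control part of Proposition 1 is possible with the closed-form lower bound as reconstructed above; the Python sketch below simply confirms that, for fixed τ and δ, the bound peaks at ε = 0.5, where sin(πε) is largest.

import numpy as np

# Lower bound P_l(tau) = exp(-tau**delta * pi**2 * delta
# / (sin(pi*delta) * sin(pi*eps))), evaluated over eps.

def p_l(tau, delta, eps):
    return np.exp(-tau ** delta * np.pi ** 2 * delta
                  / (np.sin(np.pi * delta) * np.sin(np.pi * eps)))

eps = np.linspace(0.05, 0.95, 91)
vals = p_l(tau=1.0, delta=0.5, eps=eps)         # delta = 2/alpha, e.g. alpha = 4
print("maximizing eps:", eps[np.argmax(vals)])  # -> 0.5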
Uplink association weights. The variation of the uplink SIR coverage with the association weights is shown in Fig. 7 for different PCFs and SIR thresholds. The association weights are seen to affect the SIR coverage only nominally, except in the case of no power control (where the variation is in concurrence with the result of Proposition 1). An intuitive explanation of this behavior is as follows: a higher weight imbalance may lead a user to associate with a farther macrocell with a higher path loss, but the user would then also experience reduced uplink interference owing to the correspondingly larger separation from the interfering UEs. It is worth noting here that minimum path loss association leads to an identical load distribution across all APs and hence balances the load. Moreover, since there is no adverse effect on the uplink SIR, minimum path loss association is also seen to be optimal from the rate perspective. The trend of the uplink edge (fifth percentile) and median rates with the association weights is shown in Fig. 8. As can be seen, irrespective of the PCF and density, minimum path loss association is optimal for the uplink rate. Note that these results and insights for the uplink are in contrast with the corresponding results for the downlink, where maximum SIR association (equivalent to maximum downlink received power association) is optimal for downlink SIR coverage [23], and hence a conservative association bias³ was shown to be optimal for rate coverage [22], [29]. Uplink-downlink jointly optimal association. Considered separately, as discussed above, the association weights maximizing the uplink rate coverage correspond to T_2/T_1 = 0 dB, while those maximizing the downlink rate coverage correspond to −14 dB. The joint rate coverage is shown in Fig. 9 for three pairs of (ε, η), with a rate threshold of ρ_u = ρ_d = 128 Kbps and λ_2 = 6λ_1. As can be seen from the plots, these uplink and downlink association weights of 0 dB and −14 dB (with T_1 = T′_1 = 1 in these plots) also maximize the joint uplink-downlink rate coverage, irrespective of the chosen η and ε⁴. This leads to two key observations: (i) the uplink and downlink association weights that maximize the joint rate coverage are the same as the ones that maximize their individual link coverage, and, as a result, (ii) decoupled association, i.e., different association weights for the uplink and downlink, is optimal for joint coverage.

³A bias of ∼6 dB was shown to maximize the edge and median rates in the downlink [22], [29] with a 20 dB power difference between macro and small cells, which translates to T_2/T_1 = −14 dB in the setting of this paper.
⁴Other pairs of (ε, η) also led to similar results.

Optimal coupled vs. decoupled association. In Fig. 10, the gains of the optimal decoupled association over the optimal coupled one are analytically assessed for the edge and median rates for varying PCFs, with λ_2 = 6λ_1 and η = 0.5. Note that in these plots the rates correspond to the minimum of the uplink and downlink rates, i.e., edge rate = ρ | R_J(ρ, ρ) = 0.95 and median rate = ρ | R_J(ρ, ρ) = 0.5. As observed, across all PCFs, the decoupled association provides a significant (∼1.5x) gain over the coupled association. This shows that, in spite of requiring certain architectural changes [20], decoupled association is beneficial for applications requiring similar QoS in both uplink and downlink.
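The edge and median rates compared in Fig. 10 can be read off an empirical rate distribution as in the Python sketch below, where the lognormal samples are illustrative stand-ins for the minimum of the uplink and downlink rates.

import numpy as np

# Edge rate: the threshold rho with P(rate > rho) = 0.95; median rate:
# the threshold with P(rate > rho) = 0.5.

def rate_at_coverage(rates, coverage):
    return np.quantile(rates, 1.0 - coverage)

rates = np.random.default_rng(4).lognormal(12.0, 1.0, size=100_000)
print("edge rate  :", rate_at_coverage(rates, 0.95))
print("median rate:", rate_at_coverage(rates, 0.50))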
VII. CONCLUSION

This paper proposes a novel model to analyze the uplink SINR and rate coverage in K-tier HCNs with load balancing. To the best of the authors' knowledge, this is the first work to derive and validate the uplink rate distribution in HCNs incorporating offloading and fractional power control. One of the key takeaways from this work is the contrasting behavior exhibited by the uplink and downlink rate distributions with respect to load balancing. The derivation of the uplink SINR and rate distributions as tractable functions of the system parameters opens various avenues for further design insights. For example, optimal association weights were derived in this paper for both uplink and joint uplink-downlink coverage. We assumed a parametric but fixed resource partitioning between uplink and downlink (which may also be the more practical assumption), but analyzing the impact of a more dynamic (possibly load-aware) partitioning on the presented insights could be considered in the future. The proposed uplink interference characterization can also be used to analyze systems like massive MIMO, where the uplink plays a crucial role [30]. Performance analysis of decoupled association incorporating the cost of the possible architectural changes [20] could also be an area of future investigation.

ACKNOWLEDGMENT

The authors appreciate helpful feedback from Xingqin Lin.

APPENDIX A

Derivation of Lemma 2: Let L_{I_{kj}}(s) denote the Laplace transform of the interference from the tier-j UEs; then L_{I_k}(s) = ∏_{j=1}^{K} L_{I_{kj}}(s) (from Assumption 2). Now,
SIRT1 inhibits chemoresistance and cancer stemness of gastric cancer by initiating an AMPK/FOXO3 positive feedback loop

Chemotherapy is the standard of care for patients with gastric cancer (GC); however, resistance to existing drugs has limited its success. The persistence of cancer stem cells (CSCs) is considered to be responsible for treatment failure. In this study, we demonstrated that SIRT1 expression was significantly downregulated in GC tissues and that a low SIRT1 expression level indicated a poor prognosis in GC patients. We observed a suppressive role of SIRT1 in the chemoresistance of GC both in vitro and in vivo. In addition, we found that SIRT1 eliminated the CSC properties of GC cells. Mechanistically, SIRT1 exerted its inhibitory activities on chemoresistance and CSC properties through FOXO3 and AMPK. Furthermore, a synergistic effect was revealed between FOXO3 and AMPK: AMPK promoted the nuclear translocation of FOXO3 and enhanced its transcriptional activities, and FOXO3 in turn increased the expression level and activation of AMPKα by directly binding to its promoter and activating the transcription of AMPKα. Similar to SIRT1, low expression levels of p-AMPKα and FOXO3a are also related to a poor prognosis in GC patients. Moreover, we revealed a correlation between the expression levels of SIRT1, p-AMPKα, and FOXO3a. These findings indicate the importance of the SIRT1-AMPK/FOXO3 pathway in reversing the chemoresistance and CSC properties of GC. Thus, exploring efficient strategies to activate the SIRT1-AMPK/FOXO3 pathway may improve the survival of GC patients.

Introduction

Gastric cancer (GC) remains one of the most common cancers worldwide, responsible for an estimated 1,033,701 new cases and 782,685 deaths in 2018 [1]. The high mortality rate is mainly attributed to late diagnosis and the refractory nature of GC in response to chemotherapy. Despite the recent increase in therapeutic options, the combination of 5-fluorouracil (5-FU) and cisplatin remains the generally accepted first-line chemotherapy for GC patients [2]. Due to the development of chemoresistance, this chemotherapy typically fails, thereby promoting GC recurrence [2,3]. Thus, there is an urgent need to gain a better understanding of chemoresistance in order to improve drug responses and develop novel therapeutic strategies. Tumors consist of heterogeneous cell populations, among which a subpopulation of cells is referred to as tumor-initiating cells. Tumor-initiating cells proliferate, differentiate, and produce all the cell types found in a particular tumor; therefore, they are also named cancer stem cells (CSCs) [4]. Compelling evidence has emerged indicating that the persistence of CSCs is responsible for treatment failure due to their enhanced chemoresistance [4,5]. Moreover, it has recently been reported that CSCs are enriched in response to chemotherapy, which further links CSCs with chemoresistance [6,7]. In addition to the concept of the CSC as a defined entity, current data suggest that the CSC phenotype is a plastic state in which epigenetic diversity plays an important role [8,9]. The plasticity of CSCs has motivated efforts to identify epigenetic targets to eliminate cancer stemness and improve chemotherapeutic responses. Sirtuin 1 (SIRT1) is the founding member of the class III histone deacetylases. SIRT1 uses NAD+ as a cofactor, and its substrates include histone and non-histone proteins [10-12].
In addition, dysregulation of SIRT1 has been associated with the pathogenesis of neoplastic, metabolic, infectious, and neurodegenerative diseases 10 . Recent studies have correlated SIRT1 with the function of normal stem cells 13,14 . Nevertheless, the function of SIRT1 in cancer is context dependent. Moreover, the role of SIRT1 in GC chemoresistance, CSC properties, and chemotherapy-induced stemness is largely unknown.

In this study, we demonstrated that downregulated expression of SIRT1 is related to a poor prognosis in GC patients. SIRT1 suppresses chemoresistance and CSC properties of GC through its targets FOXO3 and AMPK. In addition, we also revealed a positive feedback loop between FOXO3 and AMPK. A correlation between SIRT1, p-AMPKα, and FOXO3 was identified using clinical samples.

Results

Downregulated expression of SIRT1 is related to a poor prognosis of GC patients

A tissue array was used to examine expression of SIRT1 by IHC staining. The results showed that SIRT1 protein expression was significantly downregulated in GC tissues (Fig. 1a-c). Using univariate Cox regression analysis, we found that depth of tumor infiltration (p = 0.03), local lymph node metastasis (p < 0.001), clinical stage (pTNM status, p = 0.001), tumor grade (p = 0.044), and SIRT1 expression levels (p < 0.001) were significantly associated with the overall survival of GC patients. Furthermore, multivariate Cox regression analysis further confirmed that local lymph node metastasis (p = 0.022) and SIRT1 expression levels (p = 0.013) are independent predictors of the overall survival of GC patients (Supplementary Table S1). In addition, high SIRT1 expression levels were associated with good overall survival of GC patients (Fig. 1d). Consistently, the data from the Kaplan-Meier plotter database (218878_s_at) also associated higher SIRT1 expression levels with better overall survival and first progression (Supplementary Fig. S1a, b). Moreover, when the analysis was restricted to patients receiving a 5-FU-based treatment, the correlation between higher SIRT1 expression levels and a longer duration of overall survival (Fig. 1e) or a longer period before first progression (Fig. 1f) was significant. The correlation between SIRT1 expression levels and the prognosis of GC patients treated with a 5-FU-based regimen suggests that SIRT1 may be associated with the patient response to chemotherapy.

SIRT1 inhibits chemoresistance of GC cells

To evaluate the effects of SIRT1 on chemoresistance, stable lentivirus-infected GC cells were used. Cells stably transfected with the lentiviral expression vector of SIRT1 and the control vector were regarded as LV-S and LV-C, respectively. Cells stably transfected with lentiviral shRNA targeting SIRT1 and the negative control were regarded as LV-Si and LV-Ci, respectively. Upon treatment with cisplatin or 5-FU, GC cells overexpressing SIRT1 exhibited enhanced sensitivity. In contrast, silencing of SIRT1 facilitated resistance to cisplatin and 5-FU (Fig. 2a; Supplementary Fig. S2a). To further evaluate cell proliferation after chemotherapy, colony-formation assays were performed. Forced expression of SIRT1 caused a significant reduction in foci numbers and sizes upon cisplatin treatment, while knockdown of SIRT1 caused the opposite effects (Supplementary Fig. S2b, c). In addition, the effect of cisplatin on cell apoptosis was determined by flow cytometry. Upon cisplatin treatment, higher percentages of apoptotic cells were observed in GC cells overexpressing SIRT1 compared with the controls.
In contrast, cells with SIRT1 knockdown showed less apoptosis compared with their controls (Fig. 2b, c). Consistently, the suppressive effect of SIRT1 on apoptosis upon cisplatin treatment was also validated by the protein expression levels of cleaved caspase-3 (Fig. 2d). Furthermore, we assessed the role of SIRT1 in modulating cisplatin resistance in vivo. Cells with SIRT1 overexpression showed increased sensitivity to cisplatin treatment, as indicated by reduced tumor sizes and increased TUNEL-labeled apoptotic cells. In contrast, cells with SIRT1 silencing showed increased resistance to cisplatin (Fig. 2e-g). Taken together, our results indicated the suppressive role of SIRT1 in chemoresistance of GC cells.

SIRT1 inhibits CSC properties of GC cells

Because CSCs are considered to be responsible for chemoresistance, we examined whether SIRT1 is also involved in the maintenance of the CSC phenotype in GC. The results from mammosphere assays demonstrated that overexpression of SIRT1 markedly reduced the spheroid formation abilities of GC cells. Accordingly, SIRT1 knockdown was shown to enhance the spheroid formation abilities of GC cells (Fig. 3a, b). Consistently, the inhibitory role of SIRT1 in the CSC phenotype was confirmed by soft agar colony-formation experiments. A significant decrease in foci numbers and sizes was observed in SIRT1-overexpressing GC cells, while silencing of SIRT1 showed the opposite effects (Supplementary Fig. S3a, b). Then, mRNA levels of the classic GC stem cell marker CD44 15 and levels of important transcription factors for stemness maintenance were analyzed, and were shown to be negatively regulated by SIRT1 (Supplementary Fig. S3c-f, j). Moreover, percentages of CD44-positive cells decreased in GC cells with forced expression of SIRT1, but increased in GC cells with SIRT1 knockdown (Fig. 3e, f). In spheroids (obtained from mammosphere assays), which were considered to be formed by CSCs, the mRNA expression levels of CD44 and the abovementioned transcription factors increased, whereas the mRNA expression levels of SIRT1 decreased (Fig. 3c; Supplementary Fig. S3g). Consistent with the in vitro results, data from in vivo limiting dilution assays showed that mice harboring SIRT1-overexpressing SGC-7901 cells showed impaired tumor-initiating ability, whereas mice harboring SIRT1-knockdown SGC-7901 cells exhibited accelerated tumor formation (Table 1). Recently, it has been reported that CSCs are enriched after chemotherapy 6,7 . We examined expression of CD44, transcription factors that are responsible for maintaining stemness, and SIRT1 in GC cells upon cisplatin treatment. The results showed that after cisplatin treatment, the levels of markers for CSCs were upregulated, while SIRT1 expression levels were downregulated (Fig. 3d; Supplementary Fig. S3h, i). Upon treatment with cisplatin, CD44+ CSC populations were enriched regardless of SIRT1 expression levels, and were more abundant in GC cells with SIRT1 knockdown. Accordingly, after cisplatin treatment, GC cells overexpressing SIRT1 contained a smaller percentage of CSCs (Fig. 3e, f). Therefore, the above data suggested an inhibitory effect of SIRT1 on CSC properties of GC cells.

[Figure 1 legend. Downregulated expression of SIRT1 is related to a poor prognosis of GC patients. a Representative SIRT1-stained images from human GC (T) and corresponding para-carcinoma (P) tissues (tissue array; upper panels ×40, scale bars 500 µm; lower panels ×400, scale bars 50 µm). b SIRT1 expression in GC (right) and para-carcinoma (left) tissues (tissue array, n = 117). c IHC scores (staining intensity × positive percentage) for SIRT1 in GC and para-carcinoma tissues (mean ± SD, n = 117; ****p < 0.0001). d SIRT1 expression levels in relation to overall survival (tissue array, n = 90). e, f SIRT1 expression levels in relation to overall survival (e) and first progression (f) of GC patients treated with a 5-FU-based regimen, Kaplan-Meier plotter database (218878_s_at, n = 153).]

AMPK and FOXO3 serve as targets of SIRT1 and mediate the function of SIRT1 in chemoresistance and CSC properties

Next, we investigated targets that are responsible for the inhibitory role of SIRT1 in chemoresistance and CSC properties of GC. The STRING database (v11.0) was analyzed to screen for genes that are correlated with SIRT1 and core stemness factors (CD44, OCT4, SOX2, NANOG, and c-MYC), and FOXO3 was identified as a potential functional partner (Supplementary Fig. S4a). In accordance with the data from hematopoietic stem cells 16 , our luciferase assay results indicated that inhibition of SIRT1 suppressed the transcriptional activity of FOXO3 in GC cells (Supplementary Fig. S4b). As expected, knockdown of FOXO3 partially increased spheroid formation in GC cells with forced SIRT1 expression (Fig. 4g, h; Supplementary Fig. S5a, b). This indicates that, in addition to FOXO3, there may be some other targets of SIRT1 that participate in this process. In the STRING database, the central metabolic regulator AMP-activated protein kinase (AMPK) was shown to be associated with both SIRT1 and FOXO3 (Supplementary Fig. S4c). Using three different GC cell lines, activation of AMPKα by SIRT1 was validated by examining its phosphorylation after pretreatment with a SIRT1 agonist or antagonist (Supplementary Fig. S4d). Then we assessed AMPKα as a substrate of SIRT1 in GC using mammosphere assays. We found that knockdown of AMPKα partially increased spheroid formation in GC cells with forced SIRT1 expression. However, when we doubly knocked down AMPKα and FOXO3, a complete reversal of inhibited spheroid formation was observed (Fig. 4g, h; Supplementary Fig. S5c, d). This result suggests that both AMPK and FOXO3 serve as targets of SIRT1 in CSC properties of GC and that there may be synergistic effects between these two targets. For drug response, the results from MTS assays showed that silencing either AMPKα or FOXO3 in SIRT1-overexpressing GC cells partially reversed the chemosensitivity induced by SIRT1. However, when we doubly knocked down AMPKα and FOXO3, a complete reversal of the improved drug response was observed (Fig. 4a). In addition, data from flow cytometry indicated that deletion of either AMPKα or FOXO3 partially decreased the apoptotic cell populations in SIRT1-overexpressing GC cells treated with cisplatin. Nevertheless, the apoptotic percentage of SIRT1-overexpressing GC cells with AMPKα and FOXO3 double knockdown was comparable with that of the control (Fig. 4b, c). In the in vivo tumorigenicity assays, inhibitors were used to suppress the activities of AMPK and/or FOXO3 17 . Consistent with the in vitro findings, suppressing both AMPK and FOXO3 in SIRT1-overexpressing xenograft tumors substantially reversed the improved cisplatin sensitivity, as indicated by the tumor volumes and TUNEL staining (Fig. 4d-f). These findings indicated that both AMPK and FOXO3 are involved in SIRT1 inhibiting chemoresistance and CSC properties of GC cells, and suggested a synergistic effect between these two targets.

[Figure 2 legend (recovered fragment, panels b-g). b, c Percentages of Annexin V-positive cells after cisplatin (CDDP, 1.5 µg/ml, 36 h) by flow cytometry, with NaCl-treated cells as controls (mean ± SD, n = 3). d Western blot of caspase-3 and cleaved caspase-3 after 48 h of cisplatin (10 µg/ml for AGS; 1.5 µg/ml for BGC-823 and SGC-7901). e-g Tumorigenesis assays with stable SIRT1-overexpression/-knockdown SGC-7901 cells and controls; cisplatin (5 mg/kg every 5 days) or an equal volume of NaCl injected intraperitoneally from a tumor volume of 100 mm³ (eight mice per group); after 3 weeks, the mice were euthanized and tumor nodules (e), TUNEL staining (f, ×200, scale bars 100 µm), and tumor volumes (g, mean ± SD, n = 8) are shown; ***p < 0.001.]

[Figure 3 legend. SIRT1 inhibits CSC properties of GC cells. a, b Mammosphere assays evaluating cancer stemness (representative images in a; mean ± SD, n = 3). c, d Real-time PCR of CD44 and SIRT1 mRNA in primary GC cells vs. mammospheres (c) and in GC cells treated for 48 h with cisplatin (CDDP, 10 µg/ml for AGS, 1.5 µg/ml for SGC-7901) or NaCl (d) (mean ± SD, n = 3). e, f Percentages of CD44-positive cells by flow cytometry in stable SIRT1-overexpressing (e) or SIRT1-knockdown (f) cells treated with cisplatin (CDDP, 1.5 µg/ml, 48 h) or NaCl (mean ± SD, n = 3); **p < 0.01, ***p < 0.001.]

Positive feedback between AMPK and FOXO3

Then, we explored whether there is a positive feedback loop between AMPK and FOXO3. For the regulation of FOXO3 by AMPK, we observed the subcellular localization and transcriptional activities of FOXO3. The results of immunofluorescence staining demonstrated that the AMPK activator enhanced nuclear accumulation of FOXO3a in GC cells. In contrast, the AMPK inhibitor promoted translocation of FOXO3a from the nucleus to the cytoplasm (Fig. 5a). Then, we examined transcriptional activities of FOXO3a. The data from luciferase assays showed increased transcriptional activities of FOXO3a in GC cells treated with the AMPK agonist, while the AMPK inhibitor demonstrated the opposite effects (Fig. 5b). Next, the role of FOXO3a in AMPK regulation was determined. In nematodes, results reported by Tullet et al. 18 showed that DAF-16, which is a homolog of mammalian FOXO3, directly activates the expression of AMPKγ. However, related studies in mammals have not been performed. Functional AMPK is a heterotrimer consisting of a catalytic α-subunit, a scaffolding β-subunit, and a regulatory γ-subunit. We knocked down expression of FOXO3a in GC cells and found that, unlike the condition in nematodes, the mRNA expression levels of AMPKα decreased significantly (Fig. 5c).
We also evaluated expression of AMPKα and AMPKγ at the protein level, and the results demonstrated that expression levels of AMPKα, but not AMPKγ, were downregulated by FOXO3a interference. Moreover, phosphorylation of AMPKα was also downregulated by FOXO3a silencing (Fig. 5d). As a transcription factor, FOXO3 has been shown to bind to promoters of target genes and regulate their expression. Therefore, we analyzed promoter sequences of AMPKα and AMPKγ using the JASPAR database. Three putative binding sites of FOXO3 were found in the promoter region of AMPKα, and one binding site of FOXO3 was found in the promoter region of AMPKγ (Fig. 5e). Next, we performed ChIP assays to determine the binding of FOXO3 on the promoters of AMPKα and AMPKγ. As shown in Fig. 5f, evident binding signals were detected at the second and third binding sites on the AMPKα promoter. Only a weak band was observed for the first binding site of FOXO3 on the AMPKα promoter. No binding signal was detected for the FOXO3-binding site on the AMPKγ promoter. To further determine whether the binding of FOXO3 on the AMPKα promoter has functional significance, we performed dual luciferase assays. The results revealed that FOXO3 inhibition decreased the luciferase activities driven by the AMPKα promoter. Deletion of the first binding site of FOXO3 did not affect the decrease of luciferase activities. Nevertheless, deletion of the second binding site of FOXO3 almost completely rescued the suppressive role of FOXO3a knockdown. Moreover, deletion of the third binding site of FOXO3 also played a role in the repressive effects of FOXO3 silencing (Fig. 5g). Our results indicated that both the second and the third binding sites of FOXO3 on the AMPKα promoter are necessary for the binding of FOXO3. Taken together, the above results uncovered a positive feedback between AMPK and FOXO3.

[Figure 4 legend (recovered fragment, panels d-h). d Representative images of tumor nodules. e Tumor volumes (mean ± SD). f TUNEL staining of xenografts from each group (×200, scale bars 100 µm). g, h Mammosphere assays evaluating cancer stemness (representative images in g; mean ± SD, n = 3); *p < 0.05, **p < 0.01, ***p < 0.001, ns = not significant.]

[Figure 5 legend (recovered fragment, panels c-g). c Real-time PCR of the mRNA levels of the three AMPK subunits (mean ± SD, n = 3; Fi, siRNA targeting FOXO3a; Ni, negative control). d Western blot of AMPKα, p-AMPKα, and AMPKγ. e Scheme of putative FOXO3-binding sites on the AMPKα and AMPKγ promoters. f ChIP assays showing direct FOXO3 binding to sites within the AMPKα promoter (mainly the second and third putative sites) and no binding on the AMPKγ promoter. g Luciferase activities of AMPKα promoter constructs in GC cells treated with FOXO3a siRNAs (WT, primary AMPKα promoter; Mut-1/-2/-3, deletion of FOXO3-binding site 1/2/3; mean ± SD, n = 3); *p < 0.05, **p < 0.01, ***p < 0.001.]

Correlation between SIRT1, p-AMPKα, and FOXO3 in clinical samples

To determine the clinical significance of AMPK and FOXO3 in GC, we assessed their expression using the abovementioned tissue arrays. Because phosphorylated AMPKα is the active and functional form of AMPKα, p-AMPKα instead of AMPKα was examined. IHC staining showed downregulated expression of p-AMPKα and FOXO3a in GC tissues (Fig. 6a-c). In addition, the expression levels of p-AMPKα and FOXO3a in patients with low SIRT1 expression levels were significantly lower than those in patients with high SIRT1 expression levels. The correlation between expression levels of p-AMPKα and FOXO3a was also identified (Fig. 6d). Moreover, high expression levels of p-AMPKα and FOXO3a were correlated with good overall survival (Fig. 6e). Consistent with our findings, the data from the Kaplan-Meier plotter database (209799_s_at) also correlated high AMPKα expression levels with good overall survival (Supplementary Fig. S6a). Furthermore, in GC patients receiving 5-FU-based chemotherapy, high expression levels of AMPKα indicated good outcomes (Fig. 6f, g). Similar to the results of AMPKα, high expression levels of FOXO3a were correlated with a good prognosis in GC patients treated with a 5-FU-based regimen (204132_s_at) (Fig. 6h, i; Supplementary Fig. S6b-g). Moreover, using Cox regression analyses, we further confirmed that expression levels of FOXO3a (p < 0.001) are independent predictors of the overall survival of GC patients (Supplementary Table S1). Taken together, our findings indicate that the SIRT1-AMPK/FOXO3 signaling pathway inhibits chemoresistance and CSC properties in GC (Fig. 6j).

Discussion

Changes in SIRT1 expression levels are frequent molecular events in human cancers 11,[19][20][21] . Recent data have shown low expression levels of SIRT1 in human colon cancer, lung cancer, and glioblastoma. Moreover, high expression levels of SIRT1 indicate a good prognosis in patients. Our results showed that SIRT1 expression was downregulated in human GC tissues. High expression levels of SIRT1 indicate good outcomes in GC patients. Data from the Kaplan-Meier plotter database also associate high expression levels of SIRT1 with a good prognosis in GC. Interestingly, for GC patients treated with a 5-FU-based regimen, high expression levels of SIRT1 also indicated a good prognosis. These results suggest the tumor-suppressive role of SIRT1 in GC and associate SIRT1 with the response to chemotherapy. Currently, cisplatin- and 5-FU-based chemotherapy is the standard care for GC 2,22,23 . The consequent chemoresistance to the abovementioned treatment leads to unsatisfactory survival of GC patients. Activators of SIRT1 have been evaluated in preclinical studies and were shown to be a promising therapeutic strategy for glioblastoma and multiple myeloma 21,24 . Furthermore, in lung and pancreatic cancer, activation of SIRT1 has been shown to enhance cancer cell sensitivity to classic chemotherapy 25,26 . In this study, we provided further evidence showing that overexpression of SIRT1 improves chemotherapeutic effects in GC cells. With forced expression of SIRT1, GC cells showed a decrease in the IC50 of cisplatin and 5-FU, an increase in apoptosis upon cisplatin treatment, and enhanced sensitivity to cisplatin in xenografted mice. Moreover, SIRT1 exerted inhibitory effects on CSC properties of GC. The mechanism for the suppressive role of SIRT1 in chemoresistance and CSC properties was further explored. In Caenorhabditis elegans, the ability of Sir2, an NAD + -dependent deacetylase, to extend life span relies on the presence of Daf-16, the FOXO transcription factor 27 .
In addition to improving longevity, the SIRT1-FOXO axis was also found to play a role in alleviating insulin resistance and regulating glucose metabolism 28 . SIRT1 protects against emphysema through FOXO3-mediated reduction of cellular senescence 29 . In the aged mouse kidney, the SIRT1-FOXO3 pathway improved cellular adaptation to hypoxia by inducing mitochondrial autophagy 30 . In GC, FOXO3 has been shown to be expressed at low levels and to exert antitumor effects 31,32 . The results of this study demonstrated that deletion of FOXO3a reverses the effects induced by SIRT1 overexpression in GC cells. However, the activity of SIRT1 on drug resistance and CSC properties of GC cells is only partially reversed by FOXO3 knockdown, suggesting that other targets of SIRT1 also participate in this process. AMPK, which acts as a conserved energy sensor, is the target of SIRT1 for regulation of cellular metabolism. Briefly, SIRT1 deacetylates LKB1; LKB1 is then translocated from the nucleus to the cytoplasm, forms an active complex, and activates AMPKα 25,33 . As metabolic sensors, SIRT1/AMPK signaling was shown to play an important role in metabolism 33 . Recently, AMPK activated by SIRT1 has been shown to act as a tumor suppressor in multiple solid tumors by inducing cell death, inhibiting cell migration, and attenuating hypoxia-induced chemoresistance 21,25,[34][35][36] . Our data demonstrated that silencing AMPKα partially reverses the inhibitory role of SIRT1 in GC cells. As AMPK was shown to be correlated with both SIRT1 and FOXO3 in the STRING database, we hypothesized that both AMPK and FOXO3 participate in the inhibitory effects of SIRT1. Subsequent double knockdown of AMPKα and FOXO3a showed a complete reversal of the effects of SIRT1 on chemoresistance. These findings were further confirmed in a mouse model. Furthermore, the results of this study demonstrated that SIRT1 exerts inhibitory effects on CSC properties through AMPK and FOXO3. Next, the potential synergistic effects between AMPK and FOXO3 were explored. It has been reported that AMPK can phosphorylate FOXO3 37 . FOXO3 phosphorylated by AMPK translocates from the cytoplasm to the nucleus with enhanced transcriptional activities 38,39 . Using immunofluorescence staining and luciferase assays, we confirmed the above effects in GC cells. In terms of FOXO3 regulation of AMPK, Tullet et al. 18 demonstrated that FOXO directly activates the expression of AMPKγ and thus plays an important role in aging in Caenorhabditis elegans. AMPKβ expression is also upregulated by FOXO, but has no effect on life span. Nevertheless, no direct regulation of AMPK by FOXO3 had been identified in mammals 40,41 . Our results demonstrated that in GC cells, FOXO3a positively regulates the expression of the α-subunit of AMPK, but not the β- or γ-subunit. Direct and functional binding of FOXO3a on the promoter of AMPKα was indicated by ChIP assays and luciferase assays. In addition to expression levels, phosphorylated AMPKα, which is the active form of AMPKα, was also downregulated by FOXO3a interference. Our findings indicated that in GC cells, FOXO3 may promote AMPKα expression and activation. Thus, a positive feedback loop between AMPK and FOXO3 is identified in GC cells. In summary, our results showed that low SIRT1 expression levels indicate a poor prognosis of GC patients. SIRT1 exerts inhibitory effects on drug responses and CSC properties of GC cells by regulating the positive feedback between AMPK and FOXO3.
Similar to SIRT1, low expression levels of p-AMPKα and FOXO3a are identified in GC tissues and are related to a poor prognosis of GC patients. In addition, correlations between SIRT1, p-AMPKα, and FOXO3a were shown using human GC samples. These findings indicate the importance of the SIRT1-AMPK/FOXO3 pathway in reversing chemoresistance and cancer stemness of GC. Thus, development of efficient strategies to activate the SIRT1-AMPK/FOXO3 pathway may eventually lead to improving the survival of GC patients.

Materials and methods

Cells and siRNAs

Human GC cell lines AGS, BGC-823, and SGC-7901 (Cell Resource Center, Institute of Biochemistry and Cell Biology at the Chinese Academy of Sciences, Shanghai, China) were cultured in F12 (AGS cells) or RPMI 1640 (BGC-823 and SGC-7901 cells) containing 10% FBS, 100 units/ml penicillin, and 100 µg/ml streptomycin. The cell bank routinely performs cell line authentication by short tandem repeat profiling, and all of the cell lines were passaged in our lab for no more than 6 months after receipt. Stable lentivirus-infected GC cells were constructed and maintained as previously described 42 . Mycoplasma PCR testing was performed every month (GeneCopoeia, Rockville, MD, USA). Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA) was used to transfect small interfering RNAs (siRNAs) (GenePharma, Shanghai, China). The sequences of the siRNAs are shown in Supplementary Table S2.

Colony-formation assay

Cells were pretreated with cisplatin, then seeded into six-well plates and incubated for 10 days. The number of colonies was counted as previously described 42 .

Flow cytometry

For apoptosis analysis, cells were analyzed with the PE Annexin V Apoptosis Detection Kit I (BD Biosciences, San Jose, CA, USA). For examination of CD44 expression, cells were stained with PE Mouse Anti-Human CD44 (#555479, BD Biosciences). Samples were examined by flow cytometry (CytoFLEX, Beckman Coulter), and the data were analyzed using CytExpert software (Beckman Coulter).

Soft agar colony-formation assay

Cells were suspended in complete medium with 0.3% agar (upper agar layer) and added to a 12-well plate precoated with complete medium containing 0.6% agar (lower agar layer). Complete medium was added to the surface of the upper agar layer and was changed every 3 days. After 15-20 days, colonies > 50 µm were counted under a microscope.

RNA extraction and quantitative real-time PCR (qRT-PCR)

Total RNA was extracted with TRIzol (Invitrogen) and converted into cDNA, which was amplified by qRT-PCR as previously described 42 . The primer sequences are shown in Supplementary Table S2.

Luciferase assay

Luciferase reporter plasmids containing the AMPKα promoter sequence, as well as AMPKα promoter sequences with the putative FOXO3a-binding sites deleted, were constructed by Bioasia (Jinan, China). FHRE-Luc (#1789, Addgene) is a luciferase construct containing three copies of the forkhead response element 43 . The relative luciferase activities were measured and calculated as previously described 44 .

Chromatin immunoprecipitation (ChIP)

ChIP assays were performed as previously described 44 with anti-FOXO3a-ChIP Grade (ab12162, Abcam). Coprecipitated DNA served as the template for amplification of the AMPKα and AMPKγ promoters. The primer sequences are shown in Supplementary Table S2.

Immunofluorescence staining

Cells were fixed in 4% fixative solution and permeabilized with 0.2% Triton X-100.
After blocking, the cells were incubated with primary antibodies against FOXO3a (#12829, Cell Signaling) and then with a fluorescent secondary antibody. Nuclei were stained with DAPI (Beyotime). Images were obtained under a microscope (Olympus, Tokyo, Japan) using CellSens Dimension software.

Xenograft tumor model

The animal study was approved by the Ethical Committee of the School of Basic Medical Sciences, Shandong University (ECSBMSSDU2019-2-010). Male BALB/c nude mice (6 weeks old) were purchased from Charles River (Beijing, China). Stably lentivirus-infected SGC-7901 cells (5 × 10⁶) were subcutaneously injected into each mouse. When the tumor volumes (L × W²/2) reached 100 mm³ (regarded as day 0), the mice were injected with cisplatin or NaCl every 5 days. Measurements of the tumor volume were performed every week. On day 21, the mice were killed, and the tumor xenografts were removed and fixed in 10% buffered formalin for TUNEL staining. The investigators were not blinded to the mouse groups during the experiments. For the recovery experiment, nude mice were randomly divided into five groups. In detail: group I, mice injected with LV-C clones (regarded as LV-C + CDDP); group II, mice injected with LV-S clones (regarded as LV-S + CDDP); group III, mice injected with LV-S clones (regarded as LV-S + A-In + CDDP); group IV, mice injected with LV-S clones (regarded as LV-S + F-In + CDDP); and group V, mice injected with LV-S clones (regarded as LV-S + D-In + CDDP). When the tumor volumes reached 100 mm³, the mice in groups III, IV, and V received an inhibitor of AMPK (Compound C, Selleckchem, Houston, TX, USA, dissolved in NaCl, 20 mg/kg, i.p.), an inhibitor of FOXO3 (AS1842856, Biochempartner, Shanghai, China, dissolved in 6% cyclodextrin, 100 mg/kg, p.o.), or both inhibitors, respectively. The mice in groups I and II received NaCl and 6% cyclodextrin as controls. The following day was regarded as day 0. The mice were then treated with cisplatin and killed as described above.

TUNEL staining

TUNEL staining for the analysis of apoptosis was performed using the In Situ Cell Death Detection Kit AP and NBT/BCIP (Roche Applied Science, Basel, Switzerland). Images were obtained under a microscope (Olympus) using CellSens Dimension software.

Statistical analysis

Comparisons between different groups were analyzed using Student's t test or one-way ANOVA. Survival curves were plotted using the Kaplan-Meier method and compared using the log-rank (Mantel-Cox) test. Survival data were analyzed by univariate and multivariate Cox regression. The correlation between SIRT1, p-AMPKα, and FOXO3a was analyzed by Spearman correlation. Statistical analysis was performed using GraphPad Prism 6 and SPSS (version 20.0). The level of statistical significance was set at p < 0.05.
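As an editorial illustration only: the survival workflow described above (Kaplan-Meier curves compared with the log-rank/Mantel-Cox test) can be sketched in a few lines of Python with the lifelines library. The data frame and its column names below are hypothetical stand-ins, not the authors' dataset; the tumor-volume helper simply encodes the L × W²/2 convention stated above.

# Minimal sketch of the survival comparison described above (lifelines library).
# All data and column names here are hypothetical, not the study's records.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def tumor_volume(length_mm: float, width_mm: float) -> float:
    # V = L x W^2 / 2, the convention used for the xenografts above
    return length_mm * width_mm ** 2 / 2

df = pd.DataFrame({
    "months": [12, 34, 7, 58, 41, 23, 60, 15],   # follow-up time
    "event": [1, 0, 1, 0, 1, 1, 0, 1],           # 1 = death observed, 0 = censored
    "sirt1_high": [0, 1, 0, 1, 1, 0, 1, 0],      # IHC-based dichotomization
})
high, low = df[df["sirt1_high"] == 1], df[df["sirt1_high"] == 0]

# Kaplan-Meier curves for each expression group
kmf = KaplanMeierFitter()
kmf.fit(high["months"], high["event"], label="SIRT1 high")
ax = kmf.plot_survival_function()
kmf.fit(low["months"], low["event"], label="SIRT1 low")
kmf.plot_survival_function(ax=ax)

# Log-rank (Mantel-Cox) comparison of the two curves
res = logrank_test(high["months"], low["months"],
                   event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p = {res.p_value:.3f}")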
Meta-analysis study to evaluate the association of MTHFR C677T polymorphism with risk of ischemic stroke

Ischemic stroke is a condition characterized by reduced blood supply to part of the brain, initiating the ischemic cascade and leading to dysfunction of the brain tissue in that area. It is one of the leading causes of death and disability and is estimated to cause around 5.7 million deaths worldwide. Methylenetetrahydrofolate reductase (MTHFR) is a rate-limiting enzyme in the methyl cycle that catalyzes the only biochemical reaction producing 5-methyltetrahydrofolate, the co-substrate for the remethylation of homocysteine to produce methionine. MTHFR C677T is a common mutation of MTHFR, and individuals homozygous for C677T produce a thermolabile form of the protein with drastically reduced catalytic activity, resulting in elevated plasma homocysteine levels, a common risk factor for cardiovascular diseases. However, the role of MTHFR C677T in ischemic stroke remains unclear. To evaluate this association, we carried out a meta-analysis of existing published studies, which included 72 studies involving 12390 cases and 16274 controls. A forest plot was made to evaluate the overall risk of the mutation in the etiology of ischemic stroke. The overall odds ratio of the study was found to be 1.319 under the random-effects model, revealing a ~32% increased risk of ischemic stroke in the presence of the MTHFR C677T mutation compared to controls. Publication bias in the study was analyzed using a funnel plot, which revealed that only 7 studies out of the 72 contributed to publication bias. These 7 studies were excluded and the meta-analysis was repeated for 65 studies; the overall odds ratio was 1.306, which showed that there was a 30% higher risk of ischemic stroke in the presence of MTHFR C677T.

Background: Stroke is a clinical condition characterized by poor blood flow to the brain resulting in cell death. It is of two major types: ischemic, due to lack of blood flow, and hemorrhagic, due to bleeding. Ischemic stroke is a clinical condition characterized by reduced blood supply to part of the brain, initiating the ischemic cascade and leading to dysfunction of the brain tissue in that area. The reduced blood flow can be caused by thrombosis, embolism, systemic hypoperfusion, or venous thrombosis. [1] Stroke is one of the leading causes of death and disability in India and the world over. In 2005, ischemic stroke was estimated to have caused around 5.7 million deaths worldwide, and 87% of these deaths were in low-income and middle-income countries. [2] The estimated adjusted prevalence rates of stroke range from 84-262/100,000 in rural areas to 334-424/100,000 in urban areas. The incidence rate is 119-145/100,000 based on recent population-based studies. [3] The MTHFR gene, which is located on chromosome 1 (1p36.3), encodes a 77 kDa dimeric protein of the same name, a rate-limiting enzyme in the methyl cycle. [4] It catalyzes the only biochemical reaction that produces 5-methyltetrahydrofolate, the co-substrate for the remethylation of homocysteine to produce methionine. [5] MTHFR C677T (a C → T substitution at bp 677) is a common mutation of MTHFR causing an alanine-to-valine substitution at position 222 of the encoded protein product. People who are homozygous for the MTHFR C677T mutation produce a thermolabile form of the protein with drastically reduced catalytic activity, resulting in elevated homocysteine levels in the plasma.
[6,7] Even a modest increase in plasma homocysteine has been known to be a risk factor for cardiovascular diseases. [8,9] However, its role in stroke remains unclear. Although most case-control studies suggest a positive association between elevated plasma homocysteine and stroke, nested case-control studies to establish such an association are rare and are limited by the availability of previous studies. [10] Meta-analysis is a powerful statistical technique involving the analysis of a large collection of results from individual studies for the purpose of integrating the findings. [11,12] It is a quantitative and formal epidemiological study design used to systematically assess the results of previous research to derive conclusions about that body of research. [13] We performed an updated systematic review and cumulative meta-analysis of available data and quantified the stroke risk associated with the 677T allele with a sufficient sample size to address these power limitations.

Methodology: All published manuscripts, including letters, previous meta-analyses, and abstracts, were searched. The retrieved studies were examined thoroughly to assess their appropriateness for inclusion. The search results were limited to humans. All languages were searched initially, but only articles in the English language were selected. The references of all computer-identified publications were searched for any additional studies, and the MEDLINE "related articles" option was used for all the relevant articles.

Inclusion and Exclusion Criteria: Studies were included if: (a) the study design was case-control and (b) the diagnosis of ischemic stroke was confirmed by magnetic resonance imaging (MRI) or computed tomography (CT). A standardized data collection form was used for data extraction; this form mainly included the following content: (i) name of the first author, year of publication, country, and racial descent; (ii) demographics, number of cases and controls, and source of cases and controls; (iii) distribution of genotypes and alleles; and (iv) Hardy-Weinberg equilibrium.

Meta-Analysis: All statistical analyses were performed using Comprehensive Meta-Analysis Version 3.0. Two-sided p values less than 0.05 were considered statistically significant. For the control group of each study, the observed genotype frequencies of the MTHFR C677T polymorphism were evaluated for Hardy-Weinberg equilibrium. The strength of the association between the MTHFR C677T polymorphism and ischemic stroke was assessed by odds ratios (ORs) with 95% CIs. The pooled ORs of patients with ischemic stroke vs. healthy controls were calculated for the dominant model (CT + TT vs. CC). The evaluation of the meta-analysis results included a test for heterogeneity, an analysis of sensitivity, and an examination for publication bias. Considering possible heterogeneity between studies, the I² metric was computed; p < 0.10 and I² > 50% were considered to indicate the existence of significant heterogeneity. [14] If the heterogeneity test returned p < 0.1, the pooled ORs were analyzed using the random-effects model [15]; otherwise, the fixed-effects model was used. [16] Sensitivity analyses were also performed after sequential removal of each study. Lastly, Begg's funnel plot and Egger's test were used to statistically examine any publication bias [17,18]. The overall methodology is depicted in Figure 1.
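As an editorial illustration of the pooling just described: the sketch below computes per-study ORs under the dominant model (CT + TT vs. CC), fixed- and random-effects pooled estimates, and Q/I² heterogeneity. The three 2×2 tables are synthetic, and the DerSimonian-Laird estimator is assumed for the random-effects model, since the text does not name the specific estimator implemented in Comprehensive Meta-Analysis.

# Sketch of OR pooling under the dominant model (CT+TT vs. CC); synthetic data.
import numpy as np

# (cases CT+TT, cases CC, controls CT+TT, controls CC) per study
studies = [(120, 80, 100, 110), (60, 45, 70, 90), (200, 150, 180, 210)]

log_or = np.array([np.log((a * d) / (b * c)) for a, b, c, d in studies])
var = np.array([1 / a + 1 / b + 1 / c + 1 / d for a, b, c, d in studies])

# Fixed-effects (inverse-variance) pooled log-OR
w = 1 / var
fixed = np.sum(w * log_or) / np.sum(w)

# Cochran's Q and I^2 = max(0, (Q - df) / Q)
q = np.sum(w * (log_or - fixed) ** 2)
dfree = len(studies) - 1
i2 = max(0.0, (q - dfree) / q) * 100

# DerSimonian-Laird between-study variance tau^2, then random-effects pooling
tau2 = max(0.0, (q - dfree) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

print(f"fixed-effects OR  = {np.exp(fixed):.3f}")
print(f"random-effects OR = {np.exp(pooled):.3f} "
      f"(95% CI {np.exp(pooled - 1.96 * se):.3f}-{np.exp(pooled + 1.96 * se):.3f})")
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")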
Results & Discussion: The current study investigated the association between risk of ischemic stroke and the MTHFR C677T polymorphism. The study revealed that the presence of MTHFR C677T significantly increases the risk of ischemic stroke. Numerous studies have been carried out across the globe to determine the association between the MTHFR C677T polymorphism and ischemic stroke; however, the association remains inconclusive. With the aim of accurately quantifying this association, we carried out a meta-analysis of existing published studies, which included 72 studies involving 12390 cases and 16274 controls (shown in Table 1). A forest plot was made to evaluate the overall risk of the mutation in the etiology of ischemic stroke. The overall odds ratio of the study was found to be 1.276 under the fixed-effects model and 1.319 under the random-effects model, which showed that there was a ~32% increased risk of ischemic stroke in the presence of the MTHFR C677T mutation compared to controls. The forest plot is depicted in Figure 2. The findings from the current meta-analysis are in agreement with both previously published meta-analyses evaluating the association between the risk of ischemic stroke and the MTHFR C677T polymorphism. Publication bias in the study was analyzed using a funnel plot, which revealed that only 7 studies out of the 72 contributed to publication bias. These 7 studies were excluded and the meta-analysis was repeated for the remaining 65 studies. The overall odds ratio was found to be 1.306, showing that there was a 30% higher risk of ischemic stroke in the presence of the MTHFR C677T mutation compared to controls.

Conclusion: The current study is the largest meta-analysis, consisting of 72 studies, carried out to evaluate the association between the MTHFR C677T polymorphism and the risk of ischemic stroke. The study showed that the polymorphism significantly increases (by ~30%) the risk of ischemic stroke. The study further suggests the importance of MTHFR genotyping for identifying patients susceptible to ischemic stroke and for preventing and managing stroke cases. The study findings have clear implications for health policy makers, supporting increased intake of levomefolic acid (5-methyltetrahydrofolate) to reduce the risk of ischemic stroke. Larger prospective studies with correction for multiple comparisons are essential for further validating the study findings.
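The publication-bias check reported above (funnel-plot asymmetry assessed with Egger's test) can likewise be sketched as a regression of the standardized effect on precision, where a non-zero intercept suggests asymmetry. The effect sizes and standard errors below are synthetic illustrations, not the 72 studies analyzed here.

# Sketch of Egger's regression test for funnel-plot asymmetry; synthetic data.
import numpy as np
import statsmodels.api as sm

log_or = np.array([0.35, 0.10, 0.42, 0.05, 0.28, 0.55, -0.02])  # ln(OR) per study
se = np.array([0.15, 0.08, 0.25, 0.10, 0.18, 0.30, 0.12])       # standard errors

# Regress standardized effect (ln(OR)/SE) on precision (1/SE); test the intercept.
fit = sm.OLS(log_or / se, sm.add_constant(1 / se)).fit()
print(f"Egger intercept = {fit.params[0]:.3f} (p = {fit.pvalues[0]:.3f})")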
Sleep habits and the relationship thereof with mental health indicators in childhood

This study investigated the relationship between sleep habits and mental health indicators in childhood as reported by caregivers, in addition to seeking evidence of validity and reliability for the Children's Sleep Habits Questionnaire (CSHQ), Brazilian version (CSHQ-BR). Sixty children participated, between 4 and 10 years old, from a public school in the central region of São Paulo, in 2019. The overall mean CSHQ-BR score was 49.08. There were no sex differences in the CSHQ and Strengths and Difficulties Questionnaire (SDQ) scores. Positive and significant correlations were observed between the CSHQ and the SDQ, specifically between sleep difficulties, parasomnias, and sleep-disordered breathing on the one hand and emotional problems and hyperactivity on the other. Sleep problems explained 23% of the variance of the SDQ scores. Cronbach's Alpha Coefficient was 0.75, indicating adequate internal consistency. These findings point to evidence of the validity and accuracy of the CSHQ-BR. Sleep habits are associated with indicators of emotional and behavioral problems.

Sleep is a complex and fundamental phenomenon for healthy development. It interferes with behavior, memory consolidation, and other cognitive, metabolic, hormonal, and emotional regulation aspects. When inadequate, it is a risk factor for obesity, diabetes, heart disease, impaired immunity, and mood disorders, interfering with the individual's physical and mental well-being and causing functional impairment and losses in interpersonal relationships (Barbisan & Bueno, 2019; Neves et al., 2017; Silva et al., 2014).

Sleep difficulties are common in childhood, affecting about 20-30% of children with typical development and 80% of children with neurodevelopmental disorders (Damiani et al., 2014; Moore et al., 2017). Anxiety and depression, attention deficit hyperactivity disorder (ADHD), and autism spectrum disorder (ASD) often occur in association with or as a comorbidity with insomnia (Nunes & Bruni, 2015; Vaughn et al., 2015; Liu et al., 2014). It is estimated, for example, that 25% to 50% of children with ADHD have sleep disorders (Miano et al., 2012; Paavonen et al., 2009).

Studies from different countries show that the association between sleep disorders and emotional/behavioral problems in children appears to be universal (Nunes & Bruni, 2015; Wu et al., 2016; Wang et al., 2020; Whalen et al., 2016). Aspects related to short-duration sleep or bedtime irregularities are associated with attention problems, aggressive behaviors, hyperactivity, and emotional symptoms (Wu et al., 2016; Wang et al., 2020; Whalen et al., 2016; Schlarb et al., 2016). Furthermore, children with sleep disorders show greater non-adaptive changes in the process of generating and regulating emotions, more difficulties in relationships with their peers, and poorer readiness for learning, with an impact on their school performance (Tso et al., 2016).

Research indicates that sleep disorders in children are significant predictors of later emotional and behavioral problems, such as anxiety, depression, somatic complaints, attention problems, poorly developed executive functions, and aggressive behaviors (Nelson et al., 2018; Whalen et al., 2016; Nunes & Bruni, 2015; Owens & Mindell, 2011).
The clinical presentations of sleep disorders are variable. During the first years, complaints of difficulty falling asleep and/or nocturnal awakenings are frequent. After this period, parasomnias (such as confusional arousals, sleepwalking, and night terrors) and sleep-disordered breathing (such as obstructive sleep apnea) are observed. From preschool age, disorders related to inadequate sleep hygiene occur and, in adolescence, it is possible to observe disorders related to circadian issues or excessive movement during sleep (Nunes & Bruni, 2015).

A recent study investigating sleep and mental health problems in preschool children in China and Japan showed an association between emotional and behavioral problems and sleep disorders. However, there was a difference in mental health and sleep aspects across countries. For instance, Chinese children with sleep difficulties have more relationship problems with their peers. In China, the most frequent sleep disorders are respiratory problems and daytime sleepiness, whereas, in Japan, they are sleep anxiety and nocturnal awakening (Wang et al., 2020). This indicates that cultural differences can have different impacts on the relationship between specific sleep characteristics and mental health indicators in childhood.

Early detection of sleep disorders and risk factors for psychiatric disorders is essential, as it provides for adequate management and, consequently, a favorable prognosis, thereby avoiding the chronicity of the diseases involved. The use of screening instruments, such as the Children's Sleep Habits Questionnaire (CSHQ) and the Strengths and Difficulties Questionnaire (SDQ), can help in daily practice, since they are quick and easy to administer.

The CSHQ is a questionnaire on sleep habits developed by Owens et al. (2000) and widely used in international studies on childhood sleep. The questionnaire's items and structure are based on common clinical presentations of the most prevalent diagnoses according to the International Classification of Sleep Disorders, and the questionnaire assesses parents' perception of their children's sleep.

Regarding the screening of mental health problems in childhood, there is the SDQ, which consists of a total of 25 items divided into five subscales. The Brazilian version of the SDQ has previous national validity evidence studies for screening mental health in childhood (Saur & Loureiro, 2012; Silva et al., 2015). SDQ scores allow one to identify internalizing symptoms and externalizing behavior problems (Goodman, 1997).

Given the association between sleep disorders and behavioral problems in children, as well as the scarcity of national studies that use screening instruments consolidated in the literature, this study aimed to investigate the relationship of sleep patterns with mental health indicators in childhood as reported by caregivers. Furthermore, the data obtained in this study provide validity evidence based on relationships with external variables for the adaptation of the CSHQ to Brazilian Portuguese. According to the literature review, the hypothesis is that correlations of low to moderate magnitude are observed between sleep problems and an increase in mental health problems in children, as reported by their caregivers.
Method

Inclusion and Exclusion Criteria

The inclusion criteria for caregivers (parents and/or guardians) were that they needed to stay with the child for at least six nights a week. Children whose caregivers reported genetic syndromes, psychiatric disorders, or continuous use of psychotropic drugs were excluded from the study.

Instruments

• CSHQ: In this study, an abbreviated version of the CSHQ was used. It had already been translated and was undergoing the validity evidence process for Brazilian Portuguese (Gios, 2020). This questionnaire includes items related to key domains covering the main clinical sleep complaints in the pediatric age group. The 33 items are conceptually grouped into eight subscales, reflecting the following sleep domains: resistance to going to bed, sleep onset, sleep duration, sleep anxiety, nocturnal awakenings, parasomnias, sleep-disordered breathing, and daytime sleepiness. It is used to screen children between 4 and 10 years of age for sleep disorders; this age limit was defined with the aim of minimizing the possible effects of pubertal changes on sleep behavior. The frequency of sleep behaviors is rated on a three-point scale, namely: "usually" (occurs five to seven times a week), "sometimes" (occurs two to four times a week), or "rarely" (zero or once a week) (Owens et al., 2000). The CSHQ-BR is described in Appendix 1.

• Strengths and Difficulties Questionnaire (SDQ): An instrument used to screen for mental health problems in childhood. It comprises 25 items divided into five subscales: prosocial behavior, hyperactivity, emotional problems, conduct problems, and peer relationship problems, with five items in each subscale. The responses can be: false (zero points), more or less true (one point), or true (two points), and each item is given a specific score. The exception occurs in the prosocial behavior subscale, in which the higher the score, the smaller the number of complaints. For each of the five SDQ subscales, the score can range from 0 to 10, with the total difficulties score being generated by the sum of the results of all subscales except prosocial behavior, ranging from 0 to 40 points (Goodman, 1997).

• The Economic Classification Brazil Criterion of the Brazilian Association of Research Companies (Critério de Classificação Econômica Brasil da Associação Brasileira de Empresas de Pesquisa; ABEP, 2019): It presents a classification of socioeconomic status by considering all the goods within a given household (irrespective of the form of purchase), level of education, and housing conditions. It stratifies a given population into seven socioeconomic statuses (SES).

Procedures and Ethical Considerations

The study was approved by the Ethics Committee at Irmandade da Santa Casa de Misericórdia de São Paulo. The assessment of sleep problems and the mental health screening were approved in two distinct individual projects (CAAE 99698918.1.0000.5479; CAAE 20689919.3.0000.5479). The person in charge of the institution where data collection took place, as well as those legally responsible for the children, signed a Voluntary Informed Consent Form (VICF), as recommended by Resolution n. 510/2016 of Brazil's National Health Council for research with human beings. Participants were informed about the procedures, the confidentiality of the data to be collected, and the anonymous disclosure of results.

Data Analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 21.0. Descriptive statistics (mean and standard deviation) were employed to characterize the sample in terms of questionnaire scores. To verify the adequacy of the data distribution and define the type of inferential analysis to be conducted, the Kolmogorov-Smirnov test was used, and the asymmetry and kurtosis values of the SDQ and CSHQ scores were described. Asymmetry values for the SDQ ranged around -1 and 1, with the exception of prosocial behavior, while kurtosis values diverged from those expected. In the CSHQ, some of the subscale values were asymmetric and far from the expected values. Finally, the Kolmogorov-Smirnov test showed statistical significance (p ≤ 0.05) and, therefore, non-parametric analyses were conducted. The scores of boys and girls in the SDQ and CSHQ were compared using the Mann-Whitney test. A Spearman correlation analysis was performed between the SDQ and CSHQ scores and, finally, a linear regression analysis was performed considering sleep disorders as the independent variable and behavior problems as the outcome. The instrument's precision for this sample was obtained through an internal consistency analysis, more specifically, Cronbach's Alpha Coefficient.

Results

Initially, descriptive analyses were performed by participants' sex and then for the total sample, considering the SDQ and CSHQ scores. The results are described in Table 1. The correlations between the questionnaire scores and the sociodemographic variables are shown in Table 2, indicating that the lower the socioeconomic status, the greater the resistance to going to bed.

With respect to age, a negative, significant, moderate correlation was found between hours of sleep and age in months, indicating that the younger the child, the more hours of sleep. An increase in age, in turn, is related to a decrease in the amount of sleep. A positive, significant, low-magnitude correlation was also found between sleep onset and age, i.e., the older the age, the later the sleep onset. Also, in relation to age, a trend towards a positive correlation with daytime sleepiness was observed.

A positive, significant, moderate correlation was observed between the total CSHQ score and the total SDQ score. In the analysis of the specific domains of the CSHQ, it was noted that sleep duration and the presence of parasomnias and respiratory disorders correlated positively, significantly, and with moderate magnitude with the total SDQ scores.
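As an editorial illustration of the non-parametric analyses just reported (Mann-Whitney comparison by sex and Spearman correlations between total scores), a minimal Python sketch is given below; the score vectors are simulated stand-ins, not the study's data.

# Sketch of the Mann-Whitney and Spearman analyses described above; data simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
cshq_total = rng.normal(49, 11, size=60)             # CSHQ-BR totals (reported mean 49.08)
sdq_total = 0.3 * cshq_total + rng.normal(0, 4, 60)  # toy SDQ totals
sex = rng.integers(0, 2, size=60)                    # 0 = boys, 1 = girls

u, p_sex = stats.mannwhitneyu(cshq_total[sex == 0], cshq_total[sex == 1])
rho, p_rho = stats.spearmanr(cshq_total, sdq_total)

print(f"Mann-Whitney U = {u:.1f} (p = {p_sex:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")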
Concerning the specific domains of the SDQ, no significant correlations were observed between the different aspects of sleep and conduct problems, relationships with peers, or prosocial behavior. In the case of problems in the relationship with peers, there was a low-correlation trend with parasomnias. On the other hand, a positive, significant, low-magnitude correlation was observed between symptoms of inattention/hyperactivity and sleep onset, parasomnias, and sleep-disordered breathing. It is noted that a longer time to sleep onset and the presence of parasomnias or respiratory disorders are related to a higher frequency of inattention/hyperactivity symptoms. These scores also tended to have a low correlation with the total CSHQ scores.

A positive, significant, moderate correlation was found between emotional symptoms and parasomnias together with sleep-disordered breathing. It is thus observed that the more symptoms of parasomnias and sleep-disordered breathing the child presents, the more emotional difficulties are reported by caregivers.

Regression analysis allowed the identification of a model with a Determination Coefficient of 0.232. Thus, it was observed that the sleep characteristics as measured by the CSHQ explain 23% of the score variance observed in behavioral problems that are indicators of mental health in children between 4 and 10 years of age. Table 3 displays the model resulting from the regression.

Finally, this study is the first one conducted with the Brazilian Portuguese version of the CSHQ (CSHQ-BR), which was previously translated and adapted by Gios (2020). Although previous studies with the European Portuguese version, i.e., PT-CSHQ (Silva et al., 2014), were also conducted in Brazil (Loekmanwidjaja et al., 2018; Urrutia-Pereira et al., 2017), in this study we chose to use the cross-culturally adapted version for Brazil, in view of the need to use instruments properly adapted to the context in which they are administered (Borsa et al., 2012). An analysis of the instrument's internal consistency was conducted for the present sample, which indicated a Cronbach's Alpha Coefficient of 0.75. This value is considered adequate, indicating a good item-total correlation of the items that make up the instrument.
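As an editorial illustration of the two quantities reported above, the determination coefficient of a simple linear regression and Cronbach's Alpha, the sketch below computes both from simulated three-point item responses; none of these values are the study's data.

# Sketch: regression R^2 and Cronbach's alpha from simulated three-point items.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
items = rng.integers(1, 4, size=(60, 33)).astype(float)  # 33 CSHQ-style items, 60 children
cshq_total = items.sum(axis=1)
sdq_total = 0.3 * cshq_total + rng.normal(0, 4, size=60)

# Simple linear regression: R^2 (coefficient of determination) = r**2
res = stats.linregress(cshq_total, sdq_total)
print(f"R^2 = {res.rvalue ** 2:.3f}")

# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / cshq_total.var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}")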
Discussion

Considering previous studies with different populations around the world, which report an association between sleep and mental health in childhood (Wu et al., 2016; Liu et al., 2014; Wang et al., 2020), and the scarcity of questionnaires available in the national context for the assessment of sleep characteristics, this study aimed to investigate the relationship between characteristics serving as indicators of sleep problems and a screening measure for internalizing and externalizing symptoms used as a triage instrument for mental health in children. The screening instruments used are also widely employed in several national and international studies, which contributes the first validity evidence and accuracy findings for the Brazilian version of the CSHQ.

According to a classification previously established for SDQ scores (Goodman, 1997), on average, children were considered borderline for emotional symptoms, i.e., as having more symptoms reported by caregivers than expected for this age group. On the other hand, the scores were within the normal range for conduct problems, hyperactivity, difficulties with peers, and prosocial behavior. In the case of the CSHQ, there is still no established cutoff point for the Brazilian population; but when comparing the data obtained in this sample with the 41-point criterion used in international studies (Owens, 2000), an average of eight points above the expected was observed, albeit with a standard deviation of 11 points, the lower limit of which would be compatible with the cutoff score used. In this sense, future studies with representative samples of the Brazilian population should establish a cutoff point more compatible with the sleep characteristics found in the country.

Initially, the results were compared between boys and girls to investigate possible differences in scores that could impact the relationship between mental health indicators and sleep problems. The results showed no statistically significant differences between the scores of boys and girls, either in the total score or in the subscales of the CSHQ and SDQ, therefore corroborating a previous study conducted by Lianqi et al. (2004) with Chinese children. Correlation analyses with the CSHQ-BR were then performed considering the sample as a whole.

It is known that sleep problems can have different characteristics (Nelson et al., 2018; Nunes & Bruni, 2015), in the same manner as there are different indicators of mental health problems (Goodman, 1997). For this reason, we chose to correlate the questionnaires' subscales beyond their total scores.

In this study, a positive relationship was found between the total CSHQ score (indicating greater sleep difficulties) and emotional problems in childhood, in addition to a specific association between parasomnias together with sleep-disordered breathing and emotional difficulties. The results show that an increase in characteristics indicating sleep problems is associated with higher indicators of mental health impairment, as reported by family members. Emotional symptoms are known to be cardinal in many mood disorders. Still, a recent study discusses specific and relatively common sleep difficulties, such as difficulty sleeping alone and increased sleep onset latency, as predictors of depression and of the severity of anxiety symptoms over time (Whalen et al., 2016). Some studies report that emotional symptoms and sleep disorders in children may be related to the type of mothering, i.e., the more present the mother is when raising her child, the fewer the sleep disorders and, consequently, the fewer the emotional problems. Preschool children experiencing insecurity and ambivalence in their attachment relationships may experience impaired sleep quality (Schlarb et al., 2016). In this sense, future studies could investigate whether the relationship between sleep problems and emotional symptoms is mediated or moderated by characteristics pertaining to the child's attachment to their caregiver.
It was observed that the later the sleep onset, the higher the score on the SDQ hyperactivity subscale. Parasomnias and sleep-disordered breathing were also positively associated with hyperactivity, indicating that these sleep-specific variables are related to an increase in general indicators of mental health problems. These results replicate those obtained in previous studies, indicating a widespread association between sleep disorders and behavioral problems in children from different countries (Li et al., 2008; Nelson et al., 2018), and specifically an association with externalizing problems such as hyperactivity (Wang et al., 2020). Those authors found lower correlations between symptoms of hyperactivity and parasomnias, respiratory disorders, and sleep onset than the correlations observed in this study. Such variations in correlation magnitude may be due to sample size, age, and cultural aspects that influence both children's sleep and mental health. It is also noteworthy that children aged 7-8 years with ADHD, i.e., with consistent functional impairment resulting from symptoms of hyperactivity, sleep fewer hours than those without the disorder (Paavonen et al., 2009).

Regarding age, the findings of this study also indicated that the younger the child, the more hours of sleep are reported by their caregivers. It was noted that the older the child, the longer the sleep-onset time, which in turn decreases the child's number of sleep hours. Among the factors that may explain a late sleep onset are the use of electronic devices and watching television before sleeping, which increase exposure to screen light and thereby impair sleep hygiene. One hypothesis is that, with increasing age, caregivers have less control over the nighttime use of these devices. It is known that the current lifestyle leads to social changes with a negative impact on sleep patterns, such as nocturnal awakenings, fewer hours of sleep, and, in some cases, insomnia (Hoge et al., 2017). There was also a trend towards a correlation between age and daytime sleepiness, a finding also reported by other authors (Liu et al., 2019). This relationship can likewise be explained by the fewer hours of sleep as age increases.

In relation to SES, the strongest correlation was with resistance to going to bed, indicating that the lower the SES, the greater the resistance. The relationship between lower SES and sleep-related problems has also been discussed in the literature, since shared rooms, family stress, and greater difficulty in establishing sleep limits and routines could lead to such losses (Crabtree et al., 2005; Li et al., 2008).

Finally, in the present study, 23% of the variance in internalizing and externalizing mental health indicators (total SDQ score) was significantly explained by sleep characteristics, corroborating a vast literature in the field that reports changes in sleep patterns as a risk factor for, and a diagnostic criterion in, mental health conditions (Nelson et al., 2018; Wang et al., 2020).
[Appendix fragment, translated from Portuguese - Sleep habits: "Usual amount of sleep per day: ___ hours and ___ minutes (taking into account nighttime sleep and naps)."]

The 33 items are conceptually grouped into eight subscales, reflecting the following sleep domains: resistance to going to bed, sleep onset, sleep duration, sleep anxiety, nocturnal awakenings, parasomnias, sleep-disordered breathing, and daytime sleepiness. The instrument is used to screen children between 4 and 10 years of age for sleep disorders; this age limit was defined with the aim of minimizing the possible effects of pubertal changes on sleep behavior. The frequency of sleep behav[...text truncated in source...] Misericórdia de São Paulo. The assessment of sleep problems and the mental health screening were approved as two distinct individual projects (CAAE 99698918.1.0000.5479; CAAE 20689919.3.0000.5479). The person in charge of the institution where data collection took place, as well as those legally responsible for the children, signed a Voluntary Informed Consent Form (VICF), as recommended by Resolution n. 510/2016 of Brazil's National Health Council for research with human beings. Participants were informed about the procedures, the confidentiality of the data to be collected, and the anonymous disclosure of results.

Data Analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS), version 21.0. Descriptive statistics (mean and standard deviation) were employed to characterize the sample in terms of questionnaire scores. To verify the adequacy of the data distribution and define the type of inferential analysis to be conducted, the Kolmogorov-Smirnov test was applied, and the skewness and kurtosis values of the SDQ and CSHQ scores were inspected. Skewness values for the SDQ ranged between -1 and 1, with the exception of prosocial behavior, while kurtosis values diverged from the expected range. In the CSHQ, some subscale values were asymmetric and far from the expected values. Finally, the Kolmogorov-Smirnov test was statistically significant (p ≤ 0.05) and, therefore, non-parametric analyses were conducted.

- The Economic Classification Brazil Criterion of the Brazilian Association of Research Companies (Critério de Classificação Econômica Brasil da Associação Brasileira de Empresas de Pesquisa; ABEP, 2019): it classifies socioeconomic status by considering all the goods within a given household (irrespective of the form of purchase), level of education, and housing conditions, stratifying a given population into seven socioeconomic statuses (SES).
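As a rough illustration of the decision rule described above (a significant Kolmogorov-Smirnov test motivating non-parametric statistics, followed by rank correlations), the sketch below runs a normality check and then a Spearman correlation. The data and variable names are hypothetical placeholders, not the study's dataset:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical total scores for 60 children (the study's actual data are not public).
cshq_total = rng.integers(33, 99, size=60).astype(float)
sdq_total = 0.2 * cshq_total + rng.normal(scale=4, size=60)

# Normality check: a significant K-S test motivates non-parametric analyses.
ks = stats.kstest(stats.zscore(sdq_total), "norm")
print(f"K-S p = {ks.pvalue:.3f} -> {'non-parametric' if ks.pvalue <= 0.05 else 'parametric'}")

# Spearman rank correlation between sleep and behavior scores.
rho, p = stats.spearmanr(cshq_total, sdq_total)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```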
[Table 1: Descriptive statistics of SDQ and CSHQ scores.]

Sleep was characterized using the shortened version of the CSHQ-BR (Questionário de Hábitos de Sono em Crianças, abbreviated preschool/school-age version), which is a caregiver-report measure. The combined use of objective sleep measures, such as actigraphy and polysomnography, would characterize sleep in this population in greater detail. It is also important that additional studies assess children's sleep and behavior through other informants, which would allow a better understanding of the child's functioning in different contexts. The sample of this study was restricted to 60 children from a single public school in the city of Sao Paulo, Brazil. It is hoped that this work will stimulate further national research on the subject, with larger samples that contemplate the heterogeneity of Brazil's population. On the other hand, an important contribution of this study stems from its findings, which provide the first evidence of both validity and accuracy of the CSHQ adapted to Brazilian [...] in-depth clinical investigation into this matter, thereby contributing to early diagnosis and intervention and, consequently, modifying the prognosis and the disease course.

Appendix 1.
Gas Dynamics and Star Formation in the Galaxy Pair NGC1512/1510 (abridged)

Here we present H i line and 20-cm radio continuum data of the nearby galaxy pair NGC 1512/1510 as obtained with the Australia Telescope Compact Array. These are complemented by GALEX UV, SINGG Hα and Spitzer mid-infrared images, allowing us to compare the distribution and kinematics of the neutral atomic gas with the locations and ages of the stellar clusters within the system. For the barred, double-ring galaxy NGC 1512 we find a very large H i disk, about 4× its optical diameter, with two pronounced spiral/tidal arms. Both its gas distribution and the distribution of the star-forming regions are affected by gravitational interaction with the neighbouring blue compact dwarf galaxy NGC 1510. The two most distant H i clumps, at radii of about 80 kpc, show signs of star formation and are likely tidal dwarf galaxies. Star formation in the outer disk of NGC 1512 is revealed by deep optical and two-color ultraviolet images. Using the latter we determine the properties of about 200 stellar clusters and explore their correlation with dense H i clumps in the even larger 2X-H i disk. The multi-wavelength analysis of the NGC 1512/1510 system, which is probably in the first stages of a minor merger having started about 400 Myr ago, links stellar and gaseous galaxy properties on scales from one to 100 kpc.

INTRODUCTION

The Local Volume (LV), generally considered as the sphere of radius 10 Mpc centred on the Local Group, contains more than 500 galaxies. For the majority of these galaxies reliable distances are currently available (Karachentsev et al. 2004, 2008). Independent distances, such as those obtained from the luminosity of Cepheids, the tip of the red giant branch (TRGB), and surface brightness fluctuations (SBF), are an essential ingredient, together with accurate velocities and detailed multi-wavelength studies of each LV galaxy, for the assembly of a dynamic 3D view of the Local Universe. This, in turn, leads to a better understanding of the local flow field, the local mass density and the local star-formation density. Interferometric H i measurements, in particular, provide insight into the overall matter distribution (baryonic and non-baryonic) in the Local Volume.

⋆ The observations were obtained with the Australia Telescope which is funded by the Commonwealth of Australia for operations as a National Facility managed by CSIRO.

The galaxy pair NGC 1512/1510 is located in the outskirts of the Local Volume and its study forms part of the 'Local Volume H i Survey' (LVHIS; Koribalski et al. 2008). Since no TRGB distance is currently available for NGC 1512, we use its Local Group velocity, vLG = 712 km s−1, to compute a Hubble distance of ∼9.5 Mpc. LVHIS is a large project† that aims to provide detailed H i distributions, velocity fields and star formation rates for a complete sample of nearby, gas-rich galaxies.

[Figure 1: (Left) Deep optical image of the system by D. Malin (priv. com.) from combined UK Schmidt Telescope plates; it has been saturated to emphasise the faintest stellar structures, in particular the prominent eastern arm and the bridge between NGC 1512 and NGC 1510. The grey scale is logarithmic; the displayed field of view is 27′ × 25′. A non-saturated R-band image of the pair, obtained as part of the SINGS project (Kennicutt et al. 2003), is overlaid onto the central region. (Right) Sketch of the stellar and H i structure of the NGC 1512/1510 system; see text for details.]
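The Hubble distance above follows from D ≈ vLG/H0. A minimal sketch, assuming H0 ≈ 75 km s−1 Mpc−1 (a value consistent with the quoted ∼9.5 Mpc but not stated in the text), which also gives the angular-to-linear scale used throughout the paper:

```python
import math

V_LG = 712.0  # Local Group velocity in km/s (from the text)
H0 = 75.0     # Hubble constant in km/s/Mpc (assumed; reproduces ~9.5 Mpc)

distance_mpc = V_LG / H0
print(f"Hubble distance ~ {distance_mpc:.1f} Mpc")  # ~9.5 Mpc

# Linear scale corresponding to 1 arcmin at D = 9.5 Mpc.
arcmin_rad = math.radians(1.0 / 60.0)
kpc_per_arcmin = 9.5e3 * arcmin_rad             # distance in kpc x angle in rad
print(f"1 arcmin ~ {kpc_per_arcmin:.2f} kpc")   # ~2.76 kpc, as quoted in the text
```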
With the Australia Telescope Compact Array (ATCA), we observed all LV galaxies that were detected in the H i Parkes All-Sky Survey (HIPASS; Barnes et al. 2001, Koribalski et al. 2004) and reside south of approximately −30° declination.

The closest neighbours to the NGC 1512/1510 system are (1) the edge-on spiral galaxy NGC 1495 (HIPASS J0358−44), (2) the galaxy pair NGC 1487 (HIPASS J0355−42) and (3) the galaxy ESO 249-G026 (HIPASS J0354−43), all located at projected distances of more than 1.5°. Within 3° (∼0.5 Mpc) we find 15 neighbours, suggesting that the NGC 1512/1510 system is part of a loose (spiral) galaxy group (LGG 108; Garcia 1993).

The barred galaxy NGC 1512 and the blue compact dwarf (BCD) galaxy NGC 1510 are an interacting galaxy pair, separated by only ∼5′ (13.8 kpc). At the adopted distance of 9.5 Mpc, 1′ corresponds to 2.76 kpc. Table 1 gives some basic properties of both galaxies. The optical appearance of both galaxies is well described by Hawarden et al. (1979). NGC 1512 (IRAS 04022−4329) is a large, strongly barred galaxy with two prominent star-forming rings. Its morphological type is generally given as SB(r)a or SB(r)b. The companion, NGC 1510 (IRAS 04019−4332), is a much smaller, peculiar S0 or lenticular galaxy. Their respective optical diameters are 8.9′ × 5.6′ and 1.3′ × 0.7′, i.e. NGC 1512's stellar disk is about seven times larger than that of NGC 1510.

Beautiful multi-color HST images of NGC 1512 by Maoz et al. (2001) clearly show the structure of the nuclear region (<20″ ∼ 1 kpc): a bright nucleus surrounded by a smooth, dusty disk which is enveloped by a highly ordered and narrow starburst ring of diameter 16″ × 12″ with a position angle (PA) of ∼90°. This nuclear ring is also evident in the J−K map by Laurikainen et al. (2006) and in the Spitzer mid-infrared images obtained as part of the SINGS project (Kennicutt et al. 2003). The dust lanes hint at a tight inner spiral structure within the nuclear disk. Fabry-Perot Hα observations of NGC 1512 by Buta (1988) show that the nuclear ring has a rotational velocity of vrot ∼ 200-220 km s−1 (assuming an inclination angle of 35°). Beyond the nuclear ring, which lies within the bulge (≲1′ = 2.8 kpc) at the centre of the bar, vrot appears to be roughly constant.

† LVHIS project: www.atnf.csiro.au/research/LVHIS

Hα images of the inner region (<4′ ≈ 11 kpc) of NGC 1512 (SINGG project; Meurer et al. 2006) reveal a second star-forming ring of approximate diameter 3′ × 2′ at PA ∼ 45°, i.e. about ten times larger than the nuclear starburst ring; its width is 20″−40″. This inner ring is composed of dozens of independent H ii regions with typical sizes of 2″−5″. The bar, which has a length of ∼3′ (8.3 kpc), lies roughly along its major axis. Some enhancement of the star formation is seen at both ends of the bar where the spiral arms commence.

The optical data presented by Kinman (1978) and Hawarden et al. (1979) revealed, for the first time, signs of tidal interaction between NGC 1510 and NGC 1512. Sandage & Bedke (1994) describe NGC 1512 as an almost-normal SBb(r) in which the interaction with NGC 1510 distorts the outer thin arm pattern. The stellar spiral arms are most prominent in deep optical images (see Fig. 1) as well as in the GALEX ultraviolet (UV) images by Gil de Paz et al. (2007a), all of which give a stunning view of the star-forming regions in NGC 1512's outer disk.
A sketch identifying important stellar and H i features of the interacting system NGC 1512/1510 is provided on the right side of Fig. 1.

NGC 1510 is a low-metallicity (Z ∼ 0.2 Z⊙) BCD galaxy (see also Section 4.6). Hawarden et al. (1979) suggested that its emission-line spectrum and blue colors are the consequence of star formation activity in material (basically H i gas) recently (∼300 Myr) accreted from NGC 1512, mimicking the properties of a red amorphous dwarf elliptical galaxy. This hypothesis is also supported by Eichendorf & Nieto (1984), who identified several low-metallicity star-forming regions in NGC 1510. One of them (the SW component) reveals a broad λ4686 He ii line which is attributed to the presence of an important population of Wolf-Rayet (WR) stars in the burst. NGC 1510 is therefore classified as a Wolf-Rayet galaxy (Conti 1991; Schaerer, Contini & Pindao 1999).

Hawarden et al. (1979) also present remarkable H i data for the galaxy pair. Their 24-pointing H i map obtained with the 64-m Parkes telescope reveals a large neutral hydrogen envelope around NGC 1512, encompassing its neighbour, NGC 1510. Koribalski et al. (2004) measure an integrated H i flux density of FHI = 259 ± 17 Jy km s−1 for the galaxy pair, named HIPASS J0403−43 in the HIPASS Bright Galaxy Catalog (see Table 1). The detected H i emission is centered on NGC 1512 and significantly extended with respect to the Parkes gridded beam of 15.5′. Hawarden et al. (1979) measured FHI = 232 ± 20 Jy km s−1 (same as Reif et al. 1982), slightly lower than the HIPASS value.

Here we present high-resolution ATCA H i line and 20-cm radio continuum data of the galaxy pair NGC 1512/1510 as well as complementary GALEX UV, SINGG Hα and Spitzer mid-infrared images. The paper is organised as follows: in Section 2 we summarise the observations and data reduction; in Section 3 we present the H i line and 20-cm radio continuum results, including our discovery of two tidal dwarf galaxy candidates. The discussion in Section 4 exploits the available multi-wavelength data sets, comparing the H i gas density with the properties of star-forming regions out to radii of 80 kpc. Section 5 contains our conclusions and Section 6 a brief outlook towards H i surveys with the Australian SKA Pathfinder (ASKAP).

OBSERVATIONS AND DATA REDUCTION

H i line and 20-cm radio continuum observations of the galaxy pair NGC 1512/1510 were obtained with the Australia Telescope Compact Array (ATCA) using multiple configurations and four (overlapping) pointings. The observing details are given in Table 2. The first frequency band (IF1) was centered on 1415 MHz with a bandwidth of 8 MHz, divided into 512 channels. This gives a channel width of 3.3 km s−1 and a velocity resolution of 4 km s−1. The ATCA primary beam is 33.6′ at 1415 MHz. The second frequency band (IF2) was centered on 1384 MHz (20-cm) with a bandwidth of 128 MHz divided into 32 channels.

The ATCA is a radio interferometer consisting of six 22-m dishes, creating 15 baselines in a single configuration, equipped with seven receiver systems covering wavelengths from 3-mm to 20-cm. While five antennas (CA01 to CA05) are movable along a 3-km long east-west track (and a 214-m long north-south spur, allowing us to create hybrid arrays), one antenna (CA06) is fixed at a distance of 3 km from the end of the track.

[Figure 2: H i moment maps of the galaxy pair NGC 1512/1510 as obtained from the ATCA using 'natural' weighting; the displayed image size is ∼60′ × 50′, about four times the area of Fig. 1. (Top) H i distribution (contour levels: 0.1, 0.5, 1, 1.5, 2, 2.5, 3 and 3.5 Jy beam−1 km s−1). (Bottom) Mean, masked H i velocity field (contour levels from 785 to 1025 km s−1 in steps of 15 km s−1). The synthesised beam (88.3″ × 75.5″) is displayed in the bottom left corner of each panel. Assuming the gas fills the beam, 0.1 Jy km s−1 corresponds to an H i column density of 1.7 × 10^19 atoms cm−2. The ellipse (center) and the circle (∼5′ towards the SW) mark the position and size of the 20-cm radio continuum emission from NGC 1512 and NGC 1510, respectively. Note that the H i emission of NGC 1512 extends well beyond the known stellar disk/arms.]
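The Figure 2 caption quotes 0.1 Jy km s−1 per beam as equivalent to NHI = 1.7 × 10^19 cm−2. This follows from the standard brightness-temperature conversion for a Gaussian beam at the 21-cm line; a minimal sketch, using the beam axes quoted in the caption:

```python
def hi_column_density(flux_jy_kms: float, bmaj_arcsec: float, bmin_arcsec: float) -> float:
    """H i column density (atoms/cm^2) from integrated 21-cm flux per beam.

    Standard Gaussian-beam conversion:
    N_HI = 1.104e21 * S[mJy/beam km/s] / (bmaj * bmin), beam axes in arcsec.
    """
    return 1.104e21 * (flux_jy_kms * 1e3) / (bmaj_arcsec * bmin_arcsec)

# Lowest contour of Fig. 2 with the quoted 88.3" x 75.5" synthesised beam:
print(f"N_HI ~ {hi_column_density(0.1, 88.3, 75.5):.2e} cm^-2")  # ~1.7e19, as in the caption
```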
By combining data from several array configurations (see Table 2) we achieve excellent uv-coverage, generated by over 100 baselines ranging from 30 m to 6 km. Using Fourier transformation, this allows us to make data cubes and images at a large range of angular resolutions (up to 6″ at 20-cm) by choosing different weights for short, medium and long baselines, which in turn are sensitive to different structure scales. The weighting of the data affects not only the resolution, but also the rms noise and the sensitivity to diffuse emission.

Data reduction was carried out with the miriad software package (Sault, Teuben & Wright 1995) using standard procedures. After calibration the IF1 data were split into a narrow-band 20-cm radio continuum data set and an H i line data set using a first-order fit to the line-free channels. H i cubes were made using 'natural' (na) and 'robust' (r=0) weighting of the uv-data in the velocity range covered by the H i emission, using steps of 10 km s−1. The longest baselines to the distant antenna six (CA06) were excluded when making the low-resolution cubes. Broad-band 20-cm radio continuum images were made using 'robust' (r=0) and 'uniform' weighting of the IF2 uv-data. The data were analysed using miriad, apart from the rotation curve fit which was obtained using the gipsy software package (van der Hulst et al. 1992).

RESULTS

The NGC 1512/1510 galaxy pair is an impressive system. Our ATCA H i mosaic (see Figs. 2-4) shows a very extended gas distribution, spanning a diameter of ∼40′ (or 110 kpc). Two prominent spiral arms, which appear to wrap around ∼1.5 times, are among the most remarkable H i features. The brightness and width of both H i arms vary with radius: most notably, in the south, Arm 1 splits into three branches, followed by a broad region of H i debris towards the west, before continuing on as a single feature towards the north. These disturbances in the outer disk of NGC 1512 are likely caused by tidal interaction with, and accretion of, the dwarf companion NGC 1510. Individual H i clouds belonging to the NGC 1512/1510 system are found out to projected radii of 30′ (∼83 kpc). The velocity gradient detected within the extended clumps agrees with that of the neighbouring spiral arms, suggesting that they are condensations within the outermost parts of the disk. Their H i properties and evidence for optical and UV counterparts are discussed in Section 3.3. Star formation is most prominent in regions of high H i column density, mostly within the arms and the bridge. We will analyse the relation between the star formation rate and the H i column density in Section 4.3.

H i in NGC 1512

The H i emission from the galaxy NGC 1512 covers a velocity range from about 750 to 1070 km s−1.
We measure an integrated H i flux density of FHI = 268 Jy km s−1, which agrees very well with the HIPASS value reported by Koribalski et al. (2004; see Table 1). This agreement indicates that very little diffuse H i emission has been filtered out by the interferometric observation. Adopting a distance of 9.5 Mpc, the H i flux density corresponds to an H i mass of 5.7 × 10^9 M⊙. The majority of the detected neutral gas clearly belongs to NGC 1512; this is evident from the center and symmetry of the gas distribution and from the gas kinematics. The H i extent of NGC 1512 is at least a factor of four larger than its optical B25 size. The H i mass to blue luminosity ratio is ∼1. See Table 3 for a summary of the galaxy properties as determined from the ATCA H i data.

The interferometric H i data allow us to determine the gas dynamics of the system. In Fig. 4 we display the H i channel maps (smoothed to 20 km s−1 resolution), which show a relatively regularly rotating inner disk of NGC 1512 and a more disturbed outer disk. Near the systemic velocity, the change in the kinematics at a radius of ∼5′ appears particularly abrupt. The extent and kinematics of the (spiral) arm curving towards the east are spectacular, nearly matched by a broader, less well-defined arm curving towards the west. Another view of the galaxy kinematics is presented in Fig. 5 in the form of a major-axis position-velocity (pv) diagram. This was obtained by summing the central 20′ along PA = 90° using a 3σ cutoff to avoid adding excessive noise to the H i signal. It shows the line-of-sight rotation velocity of NGC 1512 as a function of radius. The observed decrease of the H i velocities beyond a radius of 8-10′, compared to the inner disk, is most likely caused by an increase in the inclination of the H i disk. This warping of the outer spiral/tidal arms, which is quite common in spiral galaxies, could be related to or potentially caused by the interaction with NGC 1510.

We used the gipsy program rotcur (Begeman 1987) to fit the H i rotation curve of the galaxy NGC 1512. As a first step, we tried to obtain its centre position and systemic velocity, vsys, using five rings within the inner velocity field (r < 7′). As the results did not converge, we proceeded with the GALEX peak position for NGC 1512, α, δ (J2000) = 04h 03m 54.2s, −43° 20′ 56.5″. With this centre position held fixed, we find vsys = 900 ± 5 km s−1. We might expect the kinematic centre position of the NGC 1512/1510 system to shift with radius from the core of NGC 1512 towards its interaction partner NGC 1510 (vopt = 989 km s−1), now located ∼5′ to the southwest. Furthermore, we find that the position angle of NGC 1512 appears reasonably constant around PA = 262° ± 1°, while the inclination angle, i, varies significantly. With the centre position, vsys, and PA set to the values given above, we find the inclination angle to increase from ∼30° (r < 8′) to 46° (r > 10′). The latter is consistent with the apparent change in the ellipticity (i.e. increasing major-to-minor axis ratio) of NGC 1512's gas distribution with radius. The resulting rotation curve, vrot(r), is shown in Fig. 6. The maximum rotational velocities of vrot ≈ 225 km s−1 are reached at radii between ∼300″ and ∼500″. Beyond that, vrot rapidly decreases, reaching ∼110 km s−1 at r = 1200″ (55 kpc). The residual velocity field (see Fig. 7) shows deviations of up to approximately ±30 km s−1, most notably near the position of NGC 1510 and in the outer spiral/tidal arms.
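The flux-to-mass conversion quoted at the start of this section uses the standard optically thin relation MHI = 2.356 × 10^5 D² FHI (D in Mpc, FHI in Jy km s−1); a minimal sketch reproducing the quoted 5.7 × 10^9 M⊙:

```python
def hi_mass_msun(distance_mpc: float, flux_jy_kms: float) -> float:
    """Optically thin H i mass: M_HI = 2.356e5 * D^2 * F_HI (solar masses)."""
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

# NGC 1512/1510 system: F_HI = 268 Jy km/s at D = 9.5 Mpc (values from the text).
print(f"M_HI ~ {hi_mass_msun(9.5, 268):.1e} Msun")  # ~5.7e9 Msun
```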
The inner disk also shows deviations along an eastern arc (similar to a one-armed spiral) which roughly agrees with the elongated star-forming spiral arm of NGC 1512 seen in the GALEX UV images. The passage of the companion would have unsettled the mass distribution, possibly causing a density wave or one-armed spiral (as seen in the residual velocity field).

[Figure 7: (Left) Velocity field derived from the rotation curve of Figure 6, masked with the outer contour of NGC 1512's H i distribution; contour levels range from 785 to 1025 km s−1 in steps of 15 km s−1. (Right) Residual velocity field of the interacting galaxy pair NGC 1512/1510, with the GALEX FUV contours overlaid. As in Figs. 2-4, the ellipse (center) and the small circle (∼5′ towards the SW) mark the position and size of the 20-cm radio continuum emission from NGC 1512 and NGC 1510, respectively.]

We estimate a dynamical mass of about 3 × 10^11 M⊙ for NGC 1512, based on a galaxy radius of r = 55 kpc and a rotational velocity of vrot = 150 km s−1. If the outer H i clouds at r = 83 kpc are bound to NGC 1512, the dynamical mass increases to 4.3 × 10^11 M⊙. We note that the rotation velocity of NGC 1512 is comparable to or higher (200-220 km s−1) in the nuclear ring than in the inner disk, and significantly higher than in the outer H i envelope. The MHI to Mdyn ratio indicates that ≲2% of the mass of NGC 1512 is in the form of H i gas. No estimate of the molecular gas mass in NGC 1512 or NGC 1510 is currently available.

H i in NGC 1510

NGC 1510 lies at a projected distance of ∼5′ (13.8 kpc) from the centre of NGC 1512. This places it well inside NGC 1512's H i disk, which shows an enhancement of the H i column density at the position of NGC 1510. The offset in the residual velocity field of NGC 1512 (see Fig. 7) also suggests that NGC 1510 contains a small amount of H i gas and/or left the signature of its interaction with the inner disk of NGC 1512. Gallagher et al. (2005) speculate that NGC 1510 may have captured gas from NGC 1512, contributing to its enhanced star formation activity. Assuming the H i distribution of NGC 1510 is unresolved in the H i maps shown here, we measure an H i flux density of FHI ≈ 2 Jy km s−1, corresponding to an H i mass of MHI ≈ 4 × 10^7 M⊙. This estimate is very uncertain: it does not account for any gas NGC 1510 may have lost during the interaction, and it is likely to include some H i from the disk of NGC 1512. The estimated H i mass (if correct) is less than 1% of NGC 1512's H i mass. The H i mass to blue luminosity ratio of NGC 1510 would be ∼0.07, within the range (0.02-0.8) observed for BCD galaxies (the average ratio is ∼0.3; Huchtmeier et al. 2005, 2007). At the position of NGC 1510, the H i emission ranges from ∼970 to 1060 km s−1. Note that the optical systemic velocity of NGC 1510 is 990 ± 23 km s−1 (de Vaucouleurs et al. 1991, Lindblad & Jörsäter 1981), about 100 km s−1 higher than that of NGC 1512.

[Figure 8: Multi-wavelength images and ATCA H i spectra of the two tidal dwarf galaxy candidates in the NGC 1512/1510 system; for details see Section 3.3. Black contours (−5, 5, 8, 12, 16, 20, 25, 30, 35 and 40 mJy beam−1) show the ATCA H i emission at 950 km s−1 for N1512-south (top) and at 960 km s−1 for N1512-west (bottom), while the greyscale depicts the stellar populations as seen in the GALEX NUV image (left) and the respective optical images (right). Malin's deep optical image is used for N1512-south and a DSS2 image for N1512-west.]
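The dynamical masses quoted above follow from the usual spherical estimator Mdyn = v²rot r / G; a minimal sketch reproducing the quoted values:

```python
G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun

def dynamical_mass_msun(vrot_kms: float, radius_kpc: float) -> float:
    """Spherical dynamical-mass estimate: M_dyn = v^2 * r / G."""
    return vrot_kms**2 * radius_kpc / G

print(f"M_dyn(55 kpc) ~ {dynamical_mass_msun(150, 55):.1e} Msun")  # ~2.9e11, i.e. ~3e11
print(f"M_dyn(83 kpc) ~ {dynamical_mass_msun(150, 83):.1e} Msun")  # ~4.3e11
```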
Tidal Dwarf Galaxy Candidates

Individual H i clouds belonging to the NGC 1512/1510 system are found out to projected radii of ∼30′ (83 kpc). Their velocities agree with the general rotation of the disk gas (see Figs. 2 & 4), suggesting that these clumps are condensations within the outermost parts of the disk. It is likely that they were or are embedded in a very low surface brightness disk that remains undetected in our observations. In the following we study the three most isolated H i clouds, which lie roughly along an extension of the easternmost H i arm.

The first cloud, at α, δ (J2000) = 04h 04m 20.4s, −43° 34′ 16.7″ (∼875 km s−1), is located only 14.2′ (39 kpc) from the center of NGC 1512. We measure an approximate H i flux density of 0.56 Jy km s−1. No stellar counterpart is detected.

The second cloud, which is the brightest and most extended of the three H i clouds, is located at α, δ (J2000) = 04h 02m 37s, −43° 39′ 32″ (H i peak position), 23.3′ (64 kpc) from the center of NGC 1512. It has an H i peak flux of ∼1 Jy beam−1 and a total H i flux density of ∼3 Jy km s−1 (MHI = 6 × 10^7 M⊙). There is a clear velocity gradient along the cloud (940-970 km s−1) which agrees with the general rotation pattern of NGC 1512; its centre velocity is ∼950 km s−1. The deep optical image (see Fig. 8, top right) reveals a faint optical counterpart‡: a diffuse knot coinciding with the H i maximum, barely visible in the second-generation Digitised Sky Survey (DSS2). In addition, we find clear evidence of star formation in the GALEX FUV and NUV images (see also Section 4.2). Fig. 8 (top) shows the locations of the optical and ultraviolet emission with respect to the H i emission. We suggest that the core of this H i cloud (with an H i mass of ∼2 × 10^7 M⊙) is a tidal dwarf galaxy (TDG) and refer to it as N1512-south.

The third H i cloud is located at α, δ (J2000) = 04h 01m 19s, −43° 20′ 03″ (±10″), 28.2′ (78 kpc) from the center of NGC 1512. Note that this puts it near the edge of the field well mapped by our four overlapping ATCA pointings, where the H i sensitivity is reduced; the primary beam correction is tapered to avoid excessive noise. We suggest that this compact H i cloud is a second, more evolved tidal dwarf galaxy in the NGC 1512/1510 system and refer to it as N1512-west. It has an H i flux density of at least ∼0.3 Jy km s−1 (MHI ≳ 0.6 × 10^7 M⊙) and a centre velocity of ∼960 km s−1. Unfortunately, Malin's deep optical image does not extend this far west of NGC 1512. Nevertheless, clear evidence of star formation is again found in the GALEX FUV and NUV images (see Fig. 8, bottom).

‡ The deep optical image of the NGC 1512/1510 system, obtained by David Malin, is partially shown in our Fig. 1 and is, in its full size (∼40′ × 50′), available at www.aao.gov.au/images/deep html/n1510 d.html .

After careful calibration, we estimate FUV − NUV colors of ∼0.35-0.43 mag for N1512-south (there are two distinct star-forming knots within this TDG) and ∼0.45 mag for N1512-west (see also Section 4.2). Despite the large uncertainties (∼0.2 mag) in the color estimates in these areas (which are outside the published images by Gil de Paz et al. 2007a), we find that the derived average ages of the detected stellar populations within the TDGs are no younger than 150 Myr and possibly as old as ∼300 Myr.
N1512-west appears to be slightly older (more evolved) than N1512-south, which would be expected given its compactness and its location at the outermost tip of the extrapolated eastern arm. Following Braine et al. (2004) we can estimate the expected molecular gas mass (as traced by CO emission) of TDGs to be less than 30% of the H i mass, i.e. ≲2-3 × 10^6 M⊙ (for N1512-west). Assuming a velocity width of 10 km s−1 (approximately half the width found in H i), a CO peak flux of 30 mK beam−1 would be expected with a typical single-dish telescope. If the star-forming H i clumps in the outskirts of the NGC 1512/1510 system are indeed TDGs, we would expect to detect Hα emission from stars recently formed in the tidal debris. This young stellar population must exist in addition to the (on average) older population inferred from the GALEX FUV − NUV colors.

20-cm Radio Continuum Emission

Figure 9 shows the 20-cm radio continuum emission towards the galaxy pair NGC 1512/1510 and its surroundings. Both galaxies are clearly detected. The field contains a large number of unresolved radio sources as well as a few head-tail and wide-angle tail radio galaxies (incl. PMN J040150.3−425911). The barred spiral galaxy NGC 1512 shows extended continuum emission (∼5′ × 3′) and a bright core, while the much smaller BCD galaxy NGC 1510 appears unresolved at 30″ resolution. The 'fish'-shaped radio continuum emission is due to enhanced star formation in the region between the two galaxies, which corresponds to the tail of the fish. NGC 1512's inner star-forming ring, which is well defined in the optical, Hα and UV images, is embedded within the radio continuum emission (see Fig. 10). At high resolution, our surface brightness sensitivity is insufficient to fully map the structure of the inner ring, but NGC 1512's nuclear ring is clearly detected, as is the extended disk of emission from NGC 1510.

To estimate the star formation rate (SFR) of a galaxy from our 20-cm data we use two approaches: (1) the formation rate of recent, high-mass stars (M > 5 M⊙) is calculated using SFR [M⊙ yr−1] = 0.03 × D² × S20cm (Condon et al. 2002), where D is the distance in Mpc and S20cm the 20-cm radio continuum flux density in Jy. We measure S20cm = 38.8 mJy for NGC 1512 and 4.5 mJy for NGC 1510, resulting in 0.11 M⊙ yr−1 and 0.012 M⊙ yr−1, respectively. (2) In order to derive the formation rate of all stars (M > 0.1 M⊙) we multiply by 4.76 (see Condon et al. 2002), resulting in 0.50 M⊙ yr−1 (NGC 1512) and 0.06 M⊙ yr−1 (NGC 1510).
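A minimal sketch of the two-step 20-cm SFR estimate above (the Condon et al. 2002 scaling, with the 4.76 extrapolation factor from high-mass stars to all stars):

```python
def sfr_20cm(distance_mpc: float, s20cm_jy: float, all_masses: bool = False) -> float:
    """SFR from 20-cm continuum: SFR = 0.03 * D^2 * S_20cm (Msun/yr, M > 5 Msun).

    Multiplying by 4.76 extrapolates to all stars with M > 0.1 Msun.
    """
    sfr = 0.03 * distance_mpc**2 * s20cm_jy
    return 4.76 * sfr if all_masses else sfr

D = 9.5  # Mpc
for name, s_jy in [("NGC 1512", 0.0388), ("NGC 1510", 0.0045)]:
    print(f"{name}: SFR(M>5) = {sfr_20cm(D, s_jy):.3f}, "
          f"SFR(all) = {sfr_20cm(D, s_jy, all_masses=True):.2f} Msun/yr")
# -> ~0.11 and 0.50 Msun/yr for NGC 1512; ~0.012 and 0.06 Msun/yr for NGC 1510
```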
DISCUSSION

The H i diameter of the galaxy NGC 1512 is large (∼40′ or 110 kpc), but similar in size to the largest H i disks found in the Local Volume. For example, ATCA H i mosaics of the spiral galaxies M 83 (Koribalski et al. 2009) and Circinus (Curran et al. 2008) reveal diameters of ∼70′ (80 kpc) and ∼65′ (80 kpc), respectively. Even larger H i diameters have been found for some of the most H i-massive galaxies currently known, e.g. Malin 1 (Pickering et al. 1997), NGC 6872 (Horellou & Koribalski 2007), and HIZOA J0836−43 (Donley et al. 2006; Cluver et al. 2008). Possibly the deepest single-dish H i map, recently obtained with the Arecibo multibeam system for the nearby spiral galaxy NGC 2903, also shows a very large H i envelope and a neighbouring H i cloud (Irwin et al. 2009). A multi-wavelength study of the grand-design spiral galaxy M 83 (HIPASS J1337−29) and its neighbours, similar to the one presented here, is under way; first Parkes and ATCA H i results were presented by Koribalski (2005) and are shown on the LVHIS webpages. The H i gas dynamics of the Circinus galaxy (HIPASS J1413−65) were studied by Jones et al. (1999), using high-resolution data, and more recently by Curran et al. (2008), using a low-resolution mosaic. Circinus appears to be rather isolated and is difficult to study in the optical due to its location behind the Galactic Plane.

The 2X-H i vs XUV disk of NGC 1512

A multi-wavelength color-composite image of the NGC 1512/1510 system is shown in Fig. 11. The combination of the large-scale H i distribution with deep optical and UV emission maps is an excellent way to highlight the locations of star formation within the gaseous disk. NGC 1512's H i envelope is four times larger than its B25 optical size (see Table 1) and about twice as large as the stellar extent measured from Malin's deep optical image and from the GALEX UV images.

[Figure 11: Multi-wavelength color-composite image of the galaxy pair NGC 1512/1510 obtained using the DSS R-band image (red), the ATCA H i distribution (green) and the GALEX NUV-band image (blue). The Spitzer 24µm image is overlaid just at the center of the two galaxies. Note that in the outer disk the UV emission traces the regions of highest H i column density.]

Figure 12 gives a multi-wavelength view of the NGC 1512/1510 system, shown at high resolution (15″ = 700 pc) over the main star-forming disk (similar in size to Fig. 1). Our main purpose here is to emphasize how the observed extent and distribution of stars and gas depend on the tracer. The H i distribution is by far the largest and extends well beyond the area shown here. The GALEX FUV and NUV images, shown here smoothed to an angular resolution of 15″, trace star-forming regions out to a radius of ∼10′. Malin's deep optical image (see Fig. 1) shows a very similar distribution. We expect that a deep Hα mosaic would also match this, as hinted at by the faint chains of H ii regions seen in the rather limited SINGG Hα image. The Spitzer 8µm image allows us to see the inner spiral arms as they connect to the bar, but detects no emission in the outer disk. Most obviously missing is a map of the molecular gas in the system (e.g., as traced by CO(1-0) emission), which is expected to be similar to the Hα image.

[Figure 12: Multi-wavelength view of the NGC 1512/1510 system. The greyscale has been adjusted such that faint features in the outer disk of the system are emphasised, while the inner region is over-exposed; in some panels, white contours help to trace the emission in the over-exposed areas.]

The large gas reservoir provides copious fuel for star formation, which should be most prominent in areas of high column density (see Section 4.4). Given a high-sensitivity, high-resolution H i distribution, we can pinpoint the locations of star-forming activity in the outer disks of galaxies. The correspondence between regions of high H i column density and bright UV emission (see Fig. 12) is excellent throughout the extended disk of NGC 1512, apart from the central area, which shows an H i depression (but must be rich in molecular gas). The large majority of the observed UV complexes lie in regions where the H i column density is above 2 × 10^21 atoms cm−2 (as measured in the high-resolution H i map). For comparison, the dwarf irregular galaxy ESO 215-G?009 has a very extended H i disk (Warren et al. 2004) but shows no signs of significant star formation in the outer disk; its H i column density reaches above 10^21 atoms cm−2 in only a few locations.
Deep GALEX images of nearby galaxies show that the UV profiles of many spiral galaxies extend beyond their Hα or B25 optical radius (Thilker et al. 2005; Gil de Paz et al. 2005, 2007b). In fact, Zaritsky & Christlein (2007) suggest that XUV disks exist in ∼30% of the local spiral galaxy population. We contend that these spectacular XUV disks must be located within even larger H i envelopes, here called 2X-H i disks, which provide the fuel for continued star formation. Ultimately, it may just be a question of sensitivity that limits our observations of the outer edges of stellar and gaseous disks. We note that Irwin et al. (2009) detect H i gas down to column densities of 3 × 10^17 atoms cm−2 (assuming the gas fills the 270″ beam).

Stellar cluster ages

We use the GALEX FUV and NUV images to estimate the ages of the UV-rich star clusters in the NGC 1512/1510 system. This is done by integrating the counts per second (CPS) in ∼200 selected regions (>100 arcsec², average size 320 arcsec²), using the same polygon for both images and applying mλ = −2.5 log(CPS) + aλ (Morrissey et al. 2005), where aFUV = 18.82 mag and aNUV = 20.08 mag (all magnitudes are expressed in the AB system). We did not correct for extinction, which is negligible when computing the FUV − NUV colors.

Figure 13 shows the spatial distribution and color of the analysed star clusters. We use different symbols to identify five distinct areas within the system: the ring, the internal arm, the bridge to NGC 1510, the western debris and Arm 1 (see Fig. 1). The FUV − NUV colors (blue to red) range from −0.06 (youngest stellar population) at the southern end of the bridge to +0.68 (oldest stellar population) for the farthest cluster in the NW region. Uncertainties in the color estimates strongly depend on the brightness of the star clusters (∼0.06 for the brightest and ∼0.50 for the weakest objects). For the analysed clusters in the NGC 1512/1510 system we adopt an uncertainty of ±0.20. As extinction is negligible when computing the FUV − NUV colors (see above), higher values correspond to older ages for the last star-forming burst hosted by the UV-rich clusters. We have used the same procedure as described in Bianchi et al. (2005) and Hibbard et al. (2005) to estimate the age of the last star-forming event, assuming an instantaneous burst and the evolutionary synthesis models provided by Bruzual & Charlot (2003). Table 4 lists the results obtained for the distinct areas. While the UV colors suggest that the average stellar population in the core of NGC 1512 (red circle) is about twice as old as that of NGC 1510 (green circle), the high Hα emission in both galaxies also indicates significant recent star formation. We conclude that NGC 1512 and NGC 1510 contain both a young stellar population and an older, more evolved stellar population.

As shown in Fig. 13, there are definite color gradients along the spiral arms and other regions within the NGC 1512/1510 system. For example, while regions within the inner star-forming ring of NGC 1512 generally have similar colors, (FUV − NUV)ring = 0.28 ± 0.06 (age ∼180 Myr), slightly younger ages are found towards both ends of the bar, i.e. at the start of the inner arms.
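A minimal sketch of the photometric chain described in this subsection: counts per second to AB magnitudes via mλ = −2.5 log10(CPS) + aλ, followed by a crude age lookup that interpolates the (FUV − NUV, age) pairs quoted in the text. The interpolation is only a stand-in for the Bruzual & Charlot (2003) instantaneous-burst models actually used, and the CPS values are hypothetical:

```python
import numpy as np

A_FUV, A_NUV = 18.82, 20.08  # GALEX AB zero points (Morrissey et al. 2005)

def ab_mag(cps: float, zeropoint: float) -> float:
    """AB magnitude from integrated counts per second: m = -2.5 log10(CPS) + a."""
    return -2.5 * np.log10(cps) + zeropoint

# Crude color -> burst-age mapping built from (FUV-NUV, age) pairs quoted in the text;
# a stand-in for the Bruzual & Charlot (2003) models, not a replacement for them.
colors = np.array([-0.05, 0.21, 0.28, 0.66])    # mag
ages   = np.array([10.0, 120.0, 180.0, 380.0])  # Myr

def burst_age_myr(fuv_minus_nuv: float) -> float:
    return float(np.interp(fuv_minus_nuv, colors, ages))

# Hypothetical cluster with CPS(FUV) = 0.5 and CPS(NUV) = 2.1:
color = ab_mag(0.5, A_FUV) - ab_mag(2.1, A_NUV)
print(f"FUV-NUV = {color:.2f} mag -> age ~ {burst_age_myr(color):.0f} Myr")
```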
Regions located within the bridge between NGC 1512 and NGC 1510 are, on average, even younger, (FUV − NUV)bridge = 0.21 ± 0.12 (age ∼120 Myr), with ages of ∼10 Myr (the youngest regions in the whole system) near NGC 1512 and ∼270 Myr near NGC 1510. As shown in Fig. 12, UV-bright regions close to NGC 1512 coincide well with the H i column density maxima; their young derived ages are consistent with the Hα emission found in these knots. A UV color gradient is also observed along the prominent eastern arm (Arm 1). In the easternmost regions, which also show some Hα emission, we measure FUV − NUV colors around −0.05 (age ∼10 Myr). As the arm curves towards the south, the UV-rich clusters appear to get older, reaching FUV − NUV ∼ 0.66 (age ∼380 Myr) within the two streams of the outermost regions. Within the debris in the NW area, stellar clusters near NGC 1510 tend to have younger ages (FUV − NUV ∼ 0.2-0.3, ages of 100-200 Myr) than those located towards the north of NGC 1512 (FUV − NUV ∼ 0.5-0.7, ages of 300-400 Myr). The broadening of the H i spiral arm at the position of the NW debris suggests that something has dispersed both the neutral gas and the stellar component in this region. The age gradient found in the stellar clusters indicates that this is probably due to the gravitational interaction with the BCD galaxy NGC 1510.

The overall gas distribution, together with the star formation history of the system, provides some hints as to the gravitational interaction between the large spiral NGC 1512 and the BCD galaxy NGC 1510 and its effects on the surrounding medium. The youngest star-forming regions are found mostly to the east and west of NGC 1512, while regions towards the north and south of NGC 1512 are generally older. This east-west (young) versus north-south (old) symmetry might indicate the passage of NGC 1510 as it is accreted by NGC 1512. This interaction might have (1) triggered the bar in NGC 1512 (unless this was the result of previous interactions or minor mergers), causing gas within the co-rotation radius to flow towards the nuclear region, thus providing fuel for continuous star formation, (2) affected the spiral arm pattern, causing broadening and splitting as well as enhanced star formation, and (3) led to the ejection of material to large radii where it may become unbound, forming dense clumps able to form new stars. Evidence of the latter is the observation of two tidal dwarf galaxies in the outermost regions of the NGC 1512/1510 system.

[Figure 14: Global star formation rates (SFR) of the galaxies NGC 1512 (blue circles) and NGC 1510 (red triangles) as derived from measurements at various wavelengths. All SFR estimates are listed in Table 5 and described in the text. The SFR tracers are arranged along the x-axis roughly by the average age of the contributing starburst population (∼10 Myr for Hα; ∼100 Myr for 20-cm radio continuum emission; see Section 4.3). Two SFR(20cm) estimates are given: (1) for M > 5 M⊙ (recent SF) and (2) for M > 0.1 M⊙ (overall SF, labeled M⋆).]

The global star formation rate

There are numerous ways to estimate the star formation rate (SFR) of a galaxy. To study the global and local SFRs, we use a range of line and continuum measurements at different wavelengths (ultraviolet, optical, infrared, and radio). A combination of these data, together with an understanding of which stellar populations are detected at each wavelength, is essential to obtain the full picture.
Nevertheless, we are somewhat limited by the sensitivity, quality and field of view of the existing observations. Star formation tends to be localised and varies within galaxies. While the nuclear region and inner spiral arms of a galaxy are generally locations of significant star formation, we also find new stars forming in other areas, such as interaction zones and, occasionally, in isolated clumps (presumably of high molecular gas density) in the far outskirts of galaxies. The NGC 1512/1510 system is an excellent laboratory to study the locations and properties of its many star-forming regions, from the galaxy nuclei out to the largest radii where detached H i clouds are found (see Section 3.3), as well as in the interaction zone between the two galaxies. Here we use a range of tracers to study the global SFR of both NGC 1512 and NGC 1510 (results are summarised in Table 5 and Fig. 14), before investigating the local star formation activity within various parts of the NGC 1512/1510 system (see Section 4.4).

From our 20-cm radio continuum data we derive a recent global SFR of 0.105 M⊙ yr−1 for NGC 1512 and 0.012 M⊙ yr−1 for NGC 1510 (see Section 3.4). Another extinction-free SFR estimate is derived from the far-infrared (FIR) luminosity. Using the IRAS flux densities (Moshir et al. 1990) together with the relations given by Sanders & Mirabel (1996) and Kennicutt (1998), we derive SFR(FIR) ≈ 0.12 M⊙ yr−1 for NGC 1512 and 0.02 M⊙ yr−1 for NGC 1510. FIR emission comes from the thermal continuum re-radiation of dust grains, which absorb the visible and UV radiation emitted by massive young stars. In contrast, radio continuum emission is mainly due to synchrotron radiation from relativistic electrons accelerated in the remnants of core-collapse supernovae, and is therefore also associated with the presence of massive stars. Both estimates trace the star formation activity in the last ∼100 Myr. However, as relativistic electrons have lifetimes of ∼100 Myr (Condon et al. 2002), we should expect the 20-cm radio continuum emission to trace SFRs over somewhat extended ages.

Hα emission traces the most massive, ionising stars, and timescales of ∼10 Myr, i.e. the most recent events of star formation in the galaxy. The Hα flux given by Meurer et al. (2006) was corrected for Galactic extinction, but not for internal extinction or for the contribution of the [N ii] emission lines adjacent to Hα (see López-Sánchez & Esteban 2008)§. Using the relation by Kennicutt (1998), we find SFR(Hα) = 0.19 and 0.07 M⊙ yr−1 for NGC 1512 and NGC 1510, respectively. Slightly lower values, SFR(Hα) = 0.13 and 0.05 M⊙ yr−1, result when using the more recent Calzetti et al. (2007) calibration.

UV emission probes star formation over timescales of ∼100 Myr, the lifetime of the massive OB stars. Using the extinction-corrected GALEX UV magnitude, mFUV, as given by Gil de Paz et al. (2007a), we derive the UV flux as follows: fFUV [erg s−1 cm−2 Å−1] = 1.40 × 10−15 × 10^(0.4 × (18.82 − mFUV)). We have corrected mFUV for extinction assuming the Galactic value provided by Schlegel et al. (1998), E(B−V) = 0.011, and AFUV = 7.9 E(B−V). Applying the Salim et al. (2007) relation between the FUV luminosity and the SFR, we obtain SFR(FUV) = 0.12 and 0.04 M⊙ yr−1 for NGC 1512 and NGC 1510, respectively. For comparison, applying the Kennicutt (1998) relation results in values that are 1.3 times higher. Here we prefer the Salim et al. (2007) relation because it was derived using GALEX data.
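A minimal sketch of the FUV step above: the Galactic extinction correction AFUV = 7.9 E(B−V), then the quoted magnitude-to-flux conversion and the usual 4πD² step to a luminosity. The input magnitude below is a hypothetical illustration, not a measured value from the paper:

```python
import math

EBV = 0.011        # Galactic reddening towards the pair (Schlegel et al. 1998)
A_FUV = 7.9 * EBV  # FUV extinction, as used in the text

def fuv_flux(m_fuv_observed: float) -> float:
    """Extinction-corrected FUV flux in erg/s/cm^2/A, per the relation in the text:
    f_FUV = 1.40e-15 * 10**(0.4 * (18.82 - m_FUV)), with m_FUV corrected for A_FUV."""
    m_corr = m_fuv_observed - A_FUV
    return 1.40e-15 * 10 ** (0.4 * (18.82 - m_corr))

# Hypothetical observed magnitude m_FUV = 12.0 (illustration only):
f = fuv_flux(12.0)
D_cm = 9.5 * 3.086e24            # 9.5 Mpc in cm
L = 4.0 * math.pi * D_cm**2 * f  # luminosity per Angstrom at the adopted distance
print(f"f_FUV = {f:.2e} erg/s/cm^2/A, L_FUV = {L:.2e} erg/s/A")
```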
The SINGS Legacy project (Kennicutt et al. 2003) provides Spitzer mid-infrared (MIR) images of NGC 1512/1510. MIR emission, which traces the dust distribution within galaxies, also agrees well with the positions of the UV-rich star clusters in the system. Because of its higher intrinsic brightness, the MIR emission is mainly detected in the cores of both galaxies and in the inner ring of NGC 1512. Using the Spitzer 24µm flux density measurements of NGC 1512 (Dale et al. 2007) and NGC 1510 (obtained by us; see Table 5), together with the relations by Calzetti et al. (2007), we derive SFR(24µm) = 0.075 M⊙ yr−1 for NGC 1512 and 0.041 M⊙ yr−1 for NGC 1510. Combining the 24µm luminosity (which traces the dust-absorbed star formation) with the Hα luminosity (which probes the unobscured star formation), we derive SFR(Hα+24µm) = 0.22 M⊙ yr−1 and 0.07 M⊙ yr−1 for NGC 1512 and NGC 1510, respectively.

Figure 14 shows the star formation rates derived for NGC 1512 and NGC 1510 at various wavelengths, arranged along the x-axis by the approximate timescale which each tracer probes: from the Hα emission, tracing the very young (∼10 Myr) star formation, to the radio continuum emission, tracing the older stellar population (∼100 Myr). We find that for NGC 1510 our derived SFR estimates are in agreement (∼0.05 M⊙ yr−1). The fact that the SFR derived from the 20-cm radio continuum flux considering all masses, SFR(20cm, M ≥ 0.1 M⊙) = 0.058 M⊙ yr−1, is close to the SFR found using FUV, Hα and 24µm data reinforces the starburst nature of this BCD galaxy. For NGC 1512, the SFR estimates obtained from Hα, FUV, 24µm, FIR and 20-cm radio continuum (M > 5 M⊙) data agree (∼0.12 M⊙ yr−1). However, SFR(20cm) is about four times higher when we consider all masses. This can be explained by the non-starbursting nature of NGC 1512, which has been forming stars over a long period of time (∼Gyr).

§ As NGC 1510 is a low-metallicity galaxy, the contribution of the [N ii] emission to the Hα flux is expected to be negligible.

Following Helou, Soifer & Rowan-Robinson (1985) we calculate the q parameter, which is defined as the logarithmic ratio of the FIR to 20-cm radio flux density. We find q = 2.25 and 2.44 for NGC 1512 and NGC 1510, respectively, consistent with the mean value of 2.3 for normal spiral galaxies (Condon 1992). This result confirms the star-forming nature of both galaxies. Using the Hα flux given by Meurer et al. (2006) and the relation provided by Condon et al. (2002), we derive the thermal flux at 1.4 GHz for NGC 1512 and NGC 1510: 2.7 and 1.0 mJy, respectively. The ratio of the non-thermal to thermal radio emission, log R, is 1.1 and 0.54, respectively. The value derived for NGC 1512 agrees with that of typical star-forming galaxies (log R = 1.3 ± 0.4; Dopita et al. 2002) but is relatively low for NGC 1510. This indicates that the thermal emission from H ii regions in NGC 1510 is more important than the non-thermal emission from supernova explosions (i.e., the starburst is very recent, and there has not been enough time to convert many massive stars into supernovae). This agrees with the detection of WR features in NGC 1510 (Eichendorf & Nieto 1984).
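The q parameter above is conventionally computed as q = log10[(FIR/3.75 × 10^12 Hz)/S1.4GHz], with FIR in W m−2 and S1.4GHz in W m−2 Hz−1 (the standard Helou et al. 1985 definition). A minimal sketch; the FIR value below is a hypothetical input, chosen only so that the output matches the q ≈ 2.25 quoted for NGC 1512:

```python
import math

def q_parameter(fir_w_m2: float, s_1p4ghz_w_m2_hz: float) -> float:
    """FIR/radio ratio q = log10[(FIR / 3.75e12 Hz) / S_1.4GHz] (Helou et al. 1985)."""
    return math.log10((fir_w_m2 / 3.75e12) / s_1p4ghz_w_m2_hz)

# S_20cm = 38.8 mJy for NGC 1512 (1 Jy = 1e-26 W/m^2/Hz);
# FIR = 2.6e-13 W/m^2 is illustrative, reverse-engineered to give q ~ 2.25.
s_radio = 38.8e-3 * 1e-26
print(f"q = {q_parameter(2.6e-13, s_radio):.2f}")
```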
The local star formation activity

We have estimated the star formation rate of each UV-rich stellar cluster in the NGC 1512/1510 system using the extinction-corrected FUV luminosity and the assumptions given before. In general, we find that regions closer to NGC 1512 display higher star formation activity, in agreement with their young ages (see Fig. 13). Considering only stellar clusters with both Hα and UV emission, we compare SFR(Hα) (C07; Calzetti et al. 2007) with SFR(FUV) (S07; Salim et al. 2007). Fig. 15 shows a good correlation between both estimates for small values of SFR/area (≤0.002 M⊙ yr−1 kpc−2); however, SFR(FUV) is always lower than SFR(Hα) for regions with high SFR/area (i.e. within the inner ring of NGC 1512 and in NGC 1510). Given that the ages of the latter are similar, this must be a consequence of internal extinction within these regions, which are denser and possess a larger amount of dust (as seen in the Spitzer images) than other areas. Fig. 16 confirms this, as SFR(Hα+24µm) is systematically higher than SFR(Hα).

Next, we investigate whether the UV-rich clusters within the NGC 1512/1510 system obey the Schmidt-Kennicutt scaling laws of star formation (Kennicutt 1998). Boissier et al. (2007), for example, find that the stellar and gas radial profiles of galaxies with XUV disks follow such relations. Fig. 17 shows a comparison between SFR(FUV)/area and the H i mass density. (This analysis could be improved by adding high-resolution ATCA H i data, obtained for the southern THINGS project (Deanne, de Blok, et al.), to improve the sensitivity to small-scale structure.) Overall, we find that SFR(FUV)/area increases with MHI/area, i.e. the higher the H i gas density, the more stars are forming. However, this is not true for regions located within the inner star-forming ring of NGC 1512, where there is clearly a lack of H i gas (see Fig. 13). We conclude that in the inner region of NGC 1512 a large amount of molecular gas must be present to boost the overall gas density to the critical value or above.

In the following we check whether the Toomre Q gravitational stability criterion is satisfied at the locations of the UV-rich clusters. Ideally, we would use the H i velocity field (corrected for inclination) and the velocity dispersion of NGC 1512 to compute the critical gas density, Σcrit = αQ σκ/(πG) (see Kennicutt 1989, Martin & Kennicutt 2001), at every pixel in the disk. Here αQ is a scaling constant, σ is the velocity dispersion, and κ is the epicyclic frequency. The low-resolution H i distribution (0th moment) and the mean H i velocity field (1st moment) of NGC 1512 are shown in Fig. 3. The H i velocity dispersion (2nd moment, not shown) varies between 7 and 22 km s−1 in the spiral/tidal arms of NGC 1512. As a first step, we compare the radially averaged H i gas distribution with the critical density and the radially averaged FUV emission. For a flat rotation curve (i.e., vrot(r) = constant), which is a reasonable assumption for r ≳ 15″ (see Fig. 6), and a velocity dispersion of 6 km s−1 (as used in previous work), the above equation reduces to Σcrit = 0.6 αQ vrot/r, where r is the radius in kpc. Using the derived H i inclination (i = 35°) and position angle (PA = 265°), the de-projected radial H i surface density, ΣHI, of NGC 1512 is shown in Fig. 18. The critical density was computed for αQ = 0.7 and vrot = 150 km s−1. We find that the radially averaged H i gas alone lies just below the computed critical density.

[Figure 18: De-projected radial H i surface density of NGC 1512 compared with the critical density. The radial density of the GALEX FUV emission (dotted line), an excellent tracer of star formation activity, is also shown. Due to the lack of CO data the total gas density cannot be shown; it is expected to be close to or above the critical density at all radii where star formation is present.]
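A minimal numerical check of the flat-curve reduction above (Σcrit = αQ σκ/(πG) with κ = √2 vrot/r collapses to ≈0.6 αQ vrot/r in M⊙ pc−2 for σ = 6 km s−1), together with the Lindblad-resonance estimate used a little further below. Note that vrot ≈ 200 km s−1 in the resonance part is an assumption, chosen to be consistent with the nuclear-ring velocities and the quoted ILR/OLR radii:

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun

def sigma_crit(vrot_kms: float, r_kpc: float, alpha_q: float = 0.7,
               sigma_kms: float = 6.0) -> float:
    """Toomre critical gas density (Msun/pc^2) for a flat rotation curve:
    Sigma_crit = alpha_Q * sigma * kappa / (pi G), with kappa = sqrt(2) v / r."""
    kappa = math.sqrt(2.0) * vrot_kms / (r_kpc * 1e3)  # epicyclic freq. in km/s/pc
    return alpha_q * sigma_kms * kappa / (math.pi * G)

print(f"Sigma_crit(10 kpc) = {sigma_crit(150, 10):.1f} Msun/pc^2")
# ~6.6, vs 0.6 * 0.7 * 150 / 10 = 6.3 from the approximate formula in the text.

# Lindblad resonances for a flat curve: Omega_p = Omega(r) -/+ kappa(r)/2,
# with Omega = v/r and kappa = sqrt(2) v/r; vrot ~ 200 km/s assumed.
v, omega_p = 200.0, 50.0                      # km/s and km/s/kpc (bar pattern speed)
r_ilr = v * (1 - math.sqrt(2) / 2) / omega_p  # inner Lindblad resonance
r_olr = v * (1 + math.sqrt(2) / 2) / omega_p  # outer Lindblad resonance
print(f"ILR ~ {r_ilr:.1f} kpc, OLR ~ {r_olr:.1f} kpc")  # ~1.2 and ~6.8 kpc
```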
At radii less than 10 kpc, an increasing amount of molecular gas is needed to reach the critical gas density and feed the star formation in the nuclear region and the inner ring. The radially integrated FUV flux drops below the noise at r = 28 kpc, the likely star formation threshold. In the inner region of NGC 1512, the gas motions are strongly affected by the stellar bar, which would affect the critical density estimate. Using NGC 1512's angular velocity, Ω(r) = vrot(r)/r, we can also determine the locations of the inner and outer Lindblad resonances: Ωp = Ω(r) ± κ(r)/2, where Ωp is the bar pattern speed and κ(r) = √2 vrot/r. At r ≈ 4 kpc (the bar radius) the pattern speed is ∼50 km s−1 kpc−1, suggesting that the ILR(s) lie at r ≲ 1.2 kpc, and the OLR at r ∼ 6.8 kpc.

Figure 19 shows the logarithmic ratio of the measured H i gas density, ΣHI, to the critical density, Σcrit, for all analysed UV-rich clusters together and within the previously defined distinct regions. Taking all clusters, a peak is found at <log(ΣHI/Σcrit)> = 0.06, indicating that local star formation is, on average, happening at the local critical density. The measured H i densities are slightly higher than critical along Arm 1 (0.33) and within the NW debris (0.17), but significantly lower than critical in the inner star-forming ring (−0.44), where the molecular gas density must be high.

Finally, Fig. 20 shows, on a logarithmic scale, the SFR(FUV) density versus the gas density for UV-rich clusters in the NGC 1512/1510 system (derived here) and in the nearby Sbc galaxy M 51 (Kennicutt et al. 2007; for r ≲ 8 kpc). Because no molecular data are currently available for NGC 1512, only the H i gas density is shown. Regions within the inner star-forming ring of NGC 1512 (located at r = 90″) are, as stated before, significantly offset. Tripling the H i mass of each region achieves an approximate alignment. For M 51, molecular data are taken into account and are found to be essential for the observed correlation (Kennicutt et al. 2007). Dong et al. (2008) found that the UV-selected regions in two small fields within the large gaseous disk of M 83, ∼20 kpc from its centre, follow a similar trend; molecular gas was not taken into account (for comparison, see Martin & Kennicutt 2001).

[Figure 20: Relation between SFR(FUV)/area and the H i gas density for UV-rich clusters in the NGC 1512/1510 system (derived here) and the galaxy M 51 (from Kennicutt et al. 2007). As before, different symbols indicate distinct regions within the NGC 1512/1510 system. The FUV − NUV colors range from −0.1 (black) to +0.55 (red). The solid line is the best fit to the M 51 data (Kennicutt et al. 2007); the dashed line is the relation for whole galaxies derived by Kennicutt (1998). The gas density for M 51 combines atomic and molecular gas measurements, whereas only the H i gas is used for NGC 1512.]

Global chemical properties

Here we estimate the metallicities of NGC 1512 and NGC 1510 using data available in the literature. Calzetti et al. (2007) and Moustakas & Kennicutt (2006) give an oxygen abundance between 8.37 and 8.81, in units of 12+log(O/H), for NGC 1512. The first value was derived using optical spectroscopy and the Pilyugin & Thuan (2005) calibration. The second value was obtained by comparison with the predictions of the photoionised evolutionary synthesis models provided by Kobulnicky & Kewley (2004). However, some recent analyses using direct estimates of the electron temperature (Te) of the ionised gas (e.g., López-Sánchez 2006) suggest that these models overestimate the oxygen abundance by ∼0.2 dex. We conclude that the metallicity of NGC 1512 is between 8.4 and 8.6, slightly lower than that of the Milky Way, but within the range typically observed for spiral galaxies (Henry & Worthey 1999).

On the other hand, we have used the emission-line intensity data for NGC 1510, provided by Storchi-Bergmann et al. (1995), to compute its chemical abundance. We used the Hα/Hβ ratio to correct the data for reddening and obtained C(Hβ) = 0.54.
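A minimal sketch of these last two steps: the logarithmic reddening correction parametrised by C(Hβ), and the conversion of 12+log(O/H) to a solar fraction. The reddening-curve value f(Hα) ≈ −0.32 and the solar abundance 12+log(O/H)⊙ ≈ 8.69 are standard literature values assumed here, not quoted in the text:

```python
def deredden_ratio(obs_ratio: float, c_hbeta: float, f_lambda: float) -> float:
    """Correct a line ratio (relative to Hbeta) for reddening:
    I(lambda)/I(Hbeta) = [F(lambda)/F(Hbeta)] * 10**(c_hbeta * f_lambda)."""
    return obs_ratio * 10 ** (c_hbeta * f_lambda)

# Example: observed Halpha/Hbeta = 4.2 with C(Hbeta) = 0.54 and f(Halpha) ~ -0.32
# (the 4.2 and -0.32 are assumed illustrative values):
print(f"corrected Ha/Hb = {deredden_ratio(4.2, 0.54, -0.32):.2f}")  # ~2.8

# Oxygen abundance 12+log(O/H) = 7.95 as a fraction of solar (8.69 assumed):
print(f"Z/Zsun ~ {10 ** (7.95 - 8.69):.2f}")  # ~0.18, i.e. ~0.2 Zsun as quoted
```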
With the help of tasks in the iraf nebular package, we compute Te = 15700 K using the [O iii] λ5007/λ4363 ratio. Assuming an electron density of ne = 100 cm−3, we compute the ionic abundances of O+/H+ and O++/H+ and derive a total oxygen abundance of 12+log(O/H) = 7.95 (∼0.2 Z⊙), typical for BCD galaxies. The large metallicity difference between NGC 1512 and NGC 1510 indicates that both galaxies have experienced a very different chemical evolution, and that NGC 1510 has been in a quiescent state for a long time while NGC 1512 was forming stars continuously. The N/O ratio found in NGC 1510, log(N/O) ≈ −1.2, is rather high for a galaxy with its oxygen abundance. For comparison, Izotov & Thuan (2004) typically obtain log(N/O) ≈ −1.5 for low-metallicity BCD galaxies. Similar results are also found for other BCD galaxies with a significant population of WR stars (Brinchmann et al. 2008). It is thought that the nitrogen enrichment is a consequence of a very recent chemical pollution event, probably connected with the onset of WR winds (López-Sánchez et al. 2007). It may also be related to the interaction between galaxies (Pustilnik et al. 2004, López-Sánchez et al. 2009), but deeper optical spectroscopic data with a higher spectral resolution are needed to confirm this issue. Interaction-induced star formation Star formation depends on the gravitational instability of galaxy disks, both locally and globally. Minor mergers and tidal interactions affect the gas distribution and dynamics of galaxies, leading to the formation of bars, gas inflow as well as the ejection of gas, and, as a consequence, locally enhanced star formation. Together, these phenomena are key ingredients to the understanding of galaxy evolution. The often extended H i envelopes of spiral galaxies are particularly useful as sensitive tracers of tidal interactions and gas accretion. The gas distribution and dynamics are easily influenced by the environment, resulting in asymmetries, line broadening and/or splitting, etc. The development of a strong two-armed spiral pattern and star-forming regions in disk galaxies (here NGC 1512) which accrete low-mass dwarf companions (here NGC 1510) has been explored by Mihos & Hernquist (1994) using numerical simulations. Their models, which use a mass ratio of 10:1 for the disk galaxy and its companion, resemble the galaxy pair NGC 1512/1510 after ∼40 time units (i.e. 5.2 × 10^8 years). At that stage, the model disk galaxy has developed a pronounced, slightly asymmetric two-armed spiral pattern with significant star formation along the arms and in the nuclear region. Minor mergers are common. The Milky Way and the Andromeda galaxy are prominent examples; both have many satellites and show evidence for continuous accretion of small companions. The multitude of stellar streams detected in our Galaxy as well as some other galaxies (e.g., NGC 5907, Martinez-Delgado et al.
2008) are hinting at a rich accretion history. Minor mergers contribute significantly to galaxy assembly, accretion, and evolution. CONCLUSIONS We analysed the distribution and kinematics of the H i gas as well as the star formation activity in the galaxy pair NGC 1512/1510 and its surroundings. For the barred, double-ring galaxy NGC 1512 we find a very large H i disk, about four times the optical diameter, with two pronounced spiral arms, possibly tidally induced by the interaction with the neighbouring blue compact dwarf galaxy NGC 1510. It is possible that the interaction also triggered the formation of the bar in NGC 1512 (unless the bar already existed, maybe from a previous accretion or interaction event), which would then cause gas to fall towards the nuclear regions, feeding the star formation, as well as induce torques in the outer spiral arms. We detect two tidal dwarf galaxies with H i masses of ≲10^7 M⊙ and clear signs of star formation in the outermost regions of the system. The most distant TDG, N1512-west, is rather compact and lies at a distance of ∼80 kpc from the centre of NGC 1512, potentially at the tip of an extrapolated eastern H i arm of NGC 1512. The second TDG, N1512-south, is forming within an extended H i cloud, and is located slightly closer (64 kpc), within the extrapolated eastern H i arm. We regard these two TDGs as typical with respect to their H i mass, star-forming activity and detachment from the interacting system. While TDGs are often found in major mergers, we find that they can also form in a mildly interacting system such as NGC 1512/1510. In this case, the interaction is effectively an accretion of a blue compact dwarf galaxy (NGC 1510) by the large spiral galaxy NGC 1512. NGC 1512 hosts an extended UV disk with ≳200 clusters showing recent star formation activity. The comparison of our H i map with the GALEX images clearly shows that these clumps are located within the maxima of the neutral gas density. We have derived the ages and star formation rates of the UV-rich clusters. We find that generally only the youngest UV clusters are associated with high H i column densities, while in older UV clusters only diffuse H i gas is detected. This might suggest that as the hydrogen gas was depleted, star formation stopped in the latter regions. As a consequence, we expect to detect Hα emission in all high-density H i regions or, equivalently, in all young UV clusters. Our analysis supports a scenario in which the interaction between the BCD galaxy NGC 1510 and the large spiral galaxy NGC 1512 has triggered star formation activity in the outskirts of the disk and enhanced the tidal distortion in the H i arms. The interaction seems to occur in the north-western areas of the system, given the broadening of the H i arm and the spread of the UV-rich star clusters in this region. The system is probably in the first stages of a minor merger which started ∼400 Myr ago. OUTLOOK Future H i surveys, such as those planned with the Australian SKA Pathfinder (ASKAP; Johnston et al. 2008), will produce H i cubes and images similar to those obtained here, but over much larger areas. For example, the proposed shallow ASKAP H i survey of the sky will reach a sensitivity of ∼1 mJy beam−1 at an angular resolution of 30″ in a 12-h integration per field. Focal plane arrays will provide a very large, instantaneous field-of-view of 5.5° × 5.5°. This means that H i images similar to those shown in this paper will be obtained for the entire Local Volume.
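As a rough, back-of-the-envelope illustration of what such a sensitivity implies, the sketch below converts a per-channel noise of ∼1 mJy beam−1 into a 3σ H i mass limit using the standard relation M_HI = 2.36 × 10^5 D² S_int (D in Mpc, S_int in Jy km s−1). The channel width and assumed linewidth here are illustrative assumptions, not ASKAP specifications.

```python
import math

def hi_mass(d_mpc, s_int_jy_kms):
    """Standard HI mass estimate: M_HI = 2.36e5 * D^2 * S_int [Msun]."""
    return 2.36e5 * d_mpc**2 * s_int_jy_kms

# Assumed parameters (illustrative only, not official survey specs):
sigma_chan = 1.0e-3   # Jy/beam noise per channel (~1 mJy/beam, as quoted)
chan_width = 4.0      # km/s per channel (assumed)
line_width = 50.0     # km/s, assumed linewidth of a small dwarf or TDG

n_chan = line_width / chan_width
# 3-sigma integrated-flux limit for an unresolved line (assumption):
s_lim = 3.0 * sigma_chan * chan_width * math.sqrt(n_chan)  # Jy km/s

for d in (4.0, 10.0, 20.0):  # distances in Mpc
    print(f"D = {d:4.1f} Mpc: 3-sigma M_HI limit ~ {hi_mass(d, s_lim):.2e} Msun")
```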
Furthermore, the correlator bandwidth of 300 MHz (divided into 16,000 channels) will allow us to study the H i content of galaxies and their surroundings out to ∼60,000 km s−1 (z = 0.2). In addition, very deep 20-cm radio continuum images will be obtained for the same area.
Spatio-Temporal Distribution Characteristics and Trajectory Similarity Analysis of Tuberculosis in Beijing, China Tuberculosis (TB) is an infectious disease with one of the highest reported incidences in China. The detection of the spatio-temporal distribution characteristics of TB reflects the state of its prevention and control. Trajectory similarity analysis detects variations and loopholes in prevention and provides urban public health officials and related decision makers with more information for the allocation of public health resources and the formulation of prioritized health-related policies. This study analysed the spatio-temporal distribution characteristics of TB from 2009 to 2014 by utilizing spatial statistics, spatial autocorrelation analysis, and space-time scan statistics. Spatial statistics measured the TB incidence rate (TB patients per 100,000 residents) at the district level to determine its spatio-temporal distribution and to identify characteristics of change. Spatial autocorrelation analysis was used to detect global and local spatial autocorrelations across the study area. Purely spatial, purely temporal and space-time scan statistics were used to identify the corresponding clusters of TB at the district level. The other objective of this study was to compare the trajectory similarities between the incidence rates of TB and new smear-positive (NSP) TB patients in the resident population (NSPRP)/new smear-positive TB patients in the TB patient population (NSPTBP)/retreated smear-positive (RSP) TB patients in the resident population (RSPRP)/retreated smear-positive TB patients in the TB patient population (RSPTBP), to detect variations and loopholes in TB prevention and control among the districts in Beijing. The incidence rates in Beijing exhibited a gradual decrease from 2009 to 2014. Although global spatial autocorrelation was not detected across the districts of Beijing overall, individual districts did show evidence of local spatial autocorrelation: Chaoyang and Daxing were Low-Low districts over the six-year period. The purely spatial scan statistics analysis showed significant spatial clusters of high and low incidence rates; the purely temporal scan statistics showed a temporal cluster spanning the three years from 2009 to 2011, characterized by a high incidence rate; and the space-time scan statistics analysis showed significant spatio-temporal clusters. The distribution of the mean centres (MCs) showed that the general distributions of the NSPRP MCs and NSPTBP MCs were to the east of the incidence rate MCs. Conversely, the general distributions of the RSPRP MCs and the RSPTBP MCs were to the south of the incidence rate MCs. Based on the combined analysis of MC distribution characteristics and trajectory similarities, the NSP trajectory was most similar to the incidence rate trajectory. Thus, more attention should be focused on the discovery of NSP patients in the western part of Beijing, whereas the southern part of Beijing needs intensive treatment for RSP patients. Introduction Tuberculosis (TB) is a chronic infectious disease caused by Mycobacterium tuberculosis [1]. TB is primarily transmitted through the respiratory tract, and TB patients serve as the main source of infection.
TB is also an important public health issue in China [2]; the fifth national sample survey of epidemic diseases conducted in 2010 showed that reported TB cases consistently ranked among the highest of the national Class A and B notifiable epidemic diseases [3]. Considerable pressure has built for the prevention and control of TB, especially in cities, because of rapid urbanization and economic transition [4]. Most epidemic data have spatial attributes, which has aroused the interest of researchers with both medical and geographic backgrounds. With the development of Geographic Information System (GIS) technology and its application in epidemiology, a new branch of epidemiology called spatial epidemiology has formed. Spatial epidemiology uses spatial information to extend the analysis of epidemic diseases [5]. With the aid of spatial statistics and mapping visualization, spatial epidemiology attempts to describe and analyse the spatial distribution of human diseases, health conditions and latent factors [6][7][8][9][10][11]. Spatial epidemiology also explores spatial distribution models to predict the spatio-temporal trends of disease and the correlation between a disease and its latent factors [12][13][14]. All of this information can be used to monitor and prevent disease, respond to outbreaks and allocate medical resources in an evidence-based manner [15]. Disease mapping can present distributions intuitively. This method was first used in the investigation of a cholera outbreak caused by street water pump pollution in London in 1854. However, it explores disease only qualitatively. Quantitative methods, such as spatial autocorrelation, can objectively describe the distribution characteristics of disease, and scan statistics can describe the clustering characteristics in terms of both space and time [16][17][18]. Analysing the clustering characteristics of a disease can help detect hot spots and high-risk groups in space and time, which in turn helps decision makers formulate specific prevention and treatment policies [19]. Trajectory data contain the sequence of location and time information for moving objects. Trajectory similarity analysis has been used in fields such as transport logistics, human behaviour, and marketing management [20]. From the perspective of spatial epidemiology, the changing time sequence of spatial statistics, such as the mean centre, nearest-neighbour distance and spatial correlation coefficient, can be regarded as trajectory data [21,22]. New smear-positive (NSP) TB indicates new disease occurrence, whereas retreated smear-positive (RSP) TB is more resistant to drugs. Because both types of TB are infectious, it is necessary to analyse the distribution difference between the overall incidence rates of TB and the incidence rates for these two categories [23]. The analyses can also reflect variations in the control effect among districts and loopholes in prevention strategies. The current study applied the Euclidean distance algorithm to measure the similarities between trajectories over the full time interval. However, because we were interested in the occurrence of disease close to the current year, weights were assigned to different times to modify the algorithm. The present study aimed to clarify the spatial and temporal distribution characteristics of TB at the district level in Beijing, China from 2009 to 2014.
We utilized spatial statistics, spatial autocorrelation and scan statistics analyses to describe the distribution characteristics and clustering characteristics of TB. We evaluated the trajectory similarities between TB and NSP TB/RSP TB over a five-year period with the intent of detecting the differential distribution of these three categories and the similarity variations over time. The study investigated (1) the spatial and temporal trends in TB from 2009 to 2014; (2) the global spatial autocorrelation of overall TB across the districts of Beijing and the local spatial autocorrelation of TB in individual districts of Beijing; (3) the purely spatial, purely temporal and spatio-temporal clusters and variations of TB; and (4) trends in the mean centres of TB, NSP TB and RSP TB and the trajectory similarities of TB and NSP TB/RSP TB over a five-year period. Study Area Beijing (the capital of China) is the nation's political, cultural and economic centre. Located in the northwestern portion of the North China Plain, Beijing is adjacent to Tianjin on the southeast and Hebei province in the other directions. Beijing was divided into 16 administrative districts in July 2010, and these districts are still in place today. According to their different urban functions, these districts can be classified into four regions: the Core Districts of Capital Function (Dongcheng and Xicheng), the Urban Function Extended Districts (Chaoyang, Fengtai, Shijingshan, and Haidian), the New Districts of Urban Development (Fangshan, Tongzhou, Shunyi, Changping, and Daxing), and the Ecological Preservation Development Districts (Mentougou, Huairou, Pinggu, Miyun, and Yanqing) (Figure 1). The study area included the 16 districts under the administration of Beijing according to the latest administrative divisions. TB Data The TB data for each administrative district from 2009 to 2014 were obtained from the Beijing Health and Population Health Status Reports [24][25][26][27][28]. These data were published by the Beijing Centre for Disease Prevention and Control to provide open medical and health information. These data include the number of NSP TB patients, the number of RSP TB patients, the number of smear-negative TB patients, the number of TB patients who did not receive a sputum smear examination, the number of TB pleurisy patients and the total number of TB patients for each administrative district. Because China has only recently begun to share medical data with the public, we only had access to six years of data.
However, the data from 2009 provide only the total number of TB patients and do not include detailed information about the numbers of NSP, RSP and other categories of TB patients. Therefore, except in the trajectory similarity analysis, we used the data from 2009 to 2014. In the trajectory similarity analysis, we used the data from 2010 to 2014, because the 2009 data are not detailed enough to be suitable for this analysis. The TB data were available at the temporal resolution of one year and at the spatial resolution of the administrative district. Population Data The annual population data for each administrative district from 2009 to 2014 were obtained from the Beijing Statistical Information Net [29]. We used the resident population that had lived in Beijing for more than six months of the last year as the population for this study. Spatial District Data Fundamental geographic data at a scale of 1:4,000,000 were obtained from the National Geomatics Centre of China [30]. To display the spatial distribution of TB and to perform the spatial analysis, the TB data and population data of each administrative district were imported into the attribute table of the spatial district data. The data were double-checked to prevent errors. We used the administrative centre of each district to represent the district in the scan statistics analysis, and the longitude and latitude of the centres were used in the trajectory similarity analysis. Because we assumed that more people lived and worked near the city centre than in the suburbs, especially in a cosmopolitan city such as Beijing, we used the administrative centres to better reflect the centre of human activities for each district. The longitude and latitude of the centres are available online [31]. Registration Rate and Incidence Rate Calculations The registration rate (rr) is expressed as the observed and registered number of TB patients per 100,000 residents, using the total population of the corresponding district as the standard.
This rate can be described as follows: rr_i = (O_i / N_i) × 100,000, where O_i and N_i are the total number of TB patients (including the number of NSP TB patients, the number of RSP TB patients, the number of smear-negative TB patients, the number of TB patients who did not undergo a sputum smear examination, and the number of TB pleurisy patients) and the total population in the ith district per year, respectively. The incidence rate (IR) is used to represent the disease risk across Beijing in this study, to identify districts with higher or lower disease risks and to capture the temporal and spatial clusters. In 2010, Beijing provided TB patients with free examinations for the first time and fully covered directly observed treatment + short-course chemotherapy (DOTS) [32]. Therefore, in this study, we postulated that the IR approximately equalled the rr. Spatial Autocorrelation Analysis All attribute values on a geographic surface are related to one another, but closer values are more strongly related than more distant values [33]. Spatial autocorrelation analysis in GeoDa (Arizona State University, Phoenix, AZ, USA) is used to test whether there is interdependence, and to determine the level of interdependence, between the attribute values of one spatial unit and its neighbouring units [34]. Global spatial autocorrelation analysis reflects the autocorrelation across the whole area but cannot reflect the local distribution characteristics of the attribute value and its contribution to the global autocorrelation. Therefore, local spatial autocorrelation analysis is used to detect the autocorrelation of each spatial district and its variation across the area. Constructing a spatial weight matrix that reflects the spatial adjacency among spatial districts is the first step in the spatial autocorrelation analysis [35]. In this study, we used the binary spatial weight matrix, which is the most commonly used matrix in practice. It can be described as follows: W_ij = 1 if A_i shares a common boundary with A_j, and W_ij = 0 otherwise (including when i equals j), where W_ij is the element of the spatial weight matrix reflecting the spatial adjacency between the districts A_i and A_j. The most commonly used statistic to measure autocorrelation is Moran's I. The global Moran's I statistic ranges from −1 to 1: I > 0 indicates positive autocorrelation, I < 0 indicates negative autocorrelation, and I = 0 indicates no autocorrelation. The larger |I| is, the higher the autocorrelation. The global Moran's I can be calculated as follows [5]: I = [n Σ_i Σ_j W_ij (y_i − ȳ)(y_j − ȳ)] / [(Σ_i Σ_j W_ij) Σ_i (y_i − ȳ)²], where n is the number of districts, y_i and y_j are the IR values of spatial districts i and j, ȳ is the average IR of all districts, and W_ij is the element of the spatial weight matrix corresponding to the district pair i and j. The local Moran's I satisfies two conditions: the local indicator of spatial autocorrelation (LISA) reflects the clustering level between one spatial district and its neighbouring districts, and the sum of all LISA values is proportional to the global Moran's I.
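As a minimal sketch of the two statistics just defined, the code below computes the global Moran's I and the local (LISA) values with a binary contiguity matrix. The adjacency structure and IR values are toy inputs for illustration, not the Beijing data.

```python
import numpy as np

def global_morans_i(y, w):
    """Global Moran's I with a binary contiguity weight matrix w (n x n)."""
    n = len(y)
    z = y - y.mean()
    return n * np.sum(w * np.outer(z, z)) / (w.sum() * np.sum(z**2))

def local_morans_i(y, w):
    """Local Moran's I (LISA): I_i = (z_i / S^2) * sum_j w_ij * z_j."""
    z = y - y.mean()
    s2 = np.mean(z**2)
    return (z / s2) * (w @ z)

# Toy example: four districts in a row; neighbours share a boundary.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
ir = np.array([20.0, 25.0, 40.0, 45.0])  # illustrative IRs per 100,000

print("global Moran's I:", global_morans_i(ir, w))
print("local Moran's I: ", local_morans_i(ir, w))
```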
There are four spatial correlation modes: a spatial district with a high IR surrounded by districts with high IR values (High-High); a spatial district with a low IR surrounded by districts with high IR values (Low-High); a spatial district with a low IR surrounded by districts with low IR values (Low-Low); and a spatial district with a high IR surrounded by districts with low IR values (High-Low). For spatial district i, the LISA can be calculated as follows: I_i = [(y_i − ȳ) / S²] Σ_j W_ij (y_j − ȳ), where S² = (1/n) Σ_j (y_j − ȳ)². A larger |I_i| indicates a higher clustering level in the ith district. If I_i is positive, the ith district is an area where the level of incidence is similar to that of the surrounding areas (High-High or Low-Low). In contrast, if I_i is negative, the ith district is an area that is dissimilar to the surrounding areas (High-Low or Low-High). If |I_i| is close to 0, the occurrence of TB is randomly distributed, and there is no clustering phenomenon. Scan Statistics Analysis Scan statistics analysis, a retrospective statistical test based on a discrete Poisson model performed by SaTScan (Martin Kulldorff, Boston, MA, USA), is used to detect whether the IR of TB shows clustering characteristics and to determine the location and relative risk (RR) of the clusters. A purely spatial scan statistics analysis is defined by a circular window with a radius that varies continuously according to the population of the area; the window moves throughout the study area, with its size varying from zero to a maximum fraction of the total population at risk, to detect candidate cluster centroids. Purely temporal scan statistics analysis is similar to the purely spatial scan statistics analysis; however, the scan window is a time period. Space-time scan statistics analysis incorporates the time dimension and is defined by a cylindrical window with a geographic base and a height corresponding to time [36]. In this study, the default maximum spatial cluster size of 50% of the population was selected for the cluster analysis. Furthermore, the log likelihood ratio (LLR) was used to measure the difference in incidence inside and outside the windows [37]: LLR = O_in ln(O_in / E_in) + (C − O_in) ln[(C − O_in) / (C − E_in)], where O_in and E_in denote the numbers of actual and expected cases in the window, respectively, and C is the total number of observed cases. E_in is calculated by multiplying the general IR of Beijing by the population of the ith district and can be expressed as follows: E_in = G_IR × N_i, where N_i is the total population of the ith district. G_IR is the general IR of Beijing, which can be described as follows: G_IR = (Σ_{j=1}^{n} O_j) / (Σ_{j=1}^{n} N_j), where n is the number of districts administered by Beijing, and O_j and N_j are the number of observed cases and the population in the jth (j = 1, 2, …, n) district, respectively. The most likely cluster is the scan window with the largest LLR value, and the secondary clusters are the other scan windows with significant LLR values. The TB patients and populations of each district in each year and the coordinates of each district were included to obtain the most likely cluster, in which the districts and time frame had the largest LLR and the maximum RR.
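The sketch below evaluates this Poisson LLR for a single candidate high-rate window, with the total expected count normalized to the total observed count, as is usual for the retrospective Poisson scan statistic. The case counts are toy numbers, not the Beijing data.

```python
import math

def poisson_llr(o_in, e_in, c_total):
    """Kulldorff-style Poisson LLR for one candidate window
    (high-rate clusters only; total expected equals total observed)."""
    if o_in <= e_in:
        return 0.0
    o_out, e_out = c_total - o_in, c_total - e_in
    return o_in * math.log(o_in / e_in) + o_out * math.log(o_out / e_out)

# Toy example: 500 cases city-wide; a window with 80 observed cases
# where only 50 are expected from its population share.
print(f"LLR = {poisson_llr(80, 50, 500):.2f}")
```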
Trajectory Similarity Analysis The mean centre (MC) tool in ArcGIS (Environmental Systems Research Institute, Redlands, CA, USA) was used to identify the spatio-temporal change in TB in Beijing from 2009 to 2014. The MC identifies the geographic centre of a set of points to measure the central tendency and is calculated as follows: MC_t = (X̄_t, Ȳ_t), with X̄_t = (1/n) Σ_{j=1}^{n} x_j and Ȳ_t = (1/n) Σ_{j=1}^{n} y_j, where MC_t denotes the coordinates of the MC in the tth (t = 1, 2, …, m) year, n is the number of points over the study area in the tth year, and x_j and y_j are the coordinates of the jth (j = 1, 2, …, n) point in the tth year. The IR MCs of the same geographic area in a time series can reveal the movement of the IR central tendency. The MCs of NSP TB patients in the resident population (NSPRP), NSP TB patients in the TB patient population (NSPTBP), RSP TB patients in the resident population (RSPRP) and RSP TB patients in the TB patient population (RSPTBP) from 2010 to 2014 were calculated to identify and compare the yearly movement of the central tendency. A trajectory is a serial record of the spatial locations of moving objects with time attributes. The central tendencies of the IR, NSPRP, NSPTBP, RSPRP and RSPTBP over time can be regarded as a type of trajectory. The Euclidean distance between different tracks can be used to measure the trajectory similarities of the IR and the other four categories' central tendencies. The Euclidean distance between tracks is based on the Euclidean distance between points: first, the distance between the points of the same year is calculated, and the weighted sum of these distances is then taken. The Euclidean distance of two tracks can be calculated as follows: dist(MC1, MC2) = Σ_{t=1}^{m} p_t · dist(MC1_t, MC2_t), where dist(MC1, MC2) is the Euclidean distance between two different tracks MC1 and MC2. The smaller the distance, the higher the trajectory similarity. The total number of points on each track is the same and equals m; p_t is the weight for the different point pairs, assigned according to certain rules. Because the situation is very similar for adjacent years, we assume that the IR of the current year is more strongly correlated with the IR of the previous year than with that of the year before last, three years ago, and so on. Therefore, because we focused on the IRs close to the current year, we assigned a greater weight to the most recent year. For an exponential function a^k, a should satisfy a > 0 and a ≠ 1: when 0 < a < 1, the trend curve is monotonically decreasing; when a > 1, it is monotonically increasing. We therefore used the exponential function model (0 < a < 1) to give the point pairs different weights and unitized the results, which can be expressed as follows: p_k = a^k / Σ_{j=0}^{m−1} a^j, where k = 0, 1, 2, …, m−1, counting back from the current year. Because the curve has a sharper slope when a is close to 0, if a were small, the weights for years far from the current year would be too small; therefore, we assigned 0.5 ≤ a < 1. We generated 1000 random numbers between 0.5 and 1, then performed point estimation on the sample, and the result showed a = 0.75. We therefore established five gradient values for a (a = 0.55, 0.65, 0.75, 0.85 and 0.95), symmetrical about 0.75; however, the number of gradient values is not fixed, as long as the selected values are symmetrical about 0.75. MC1_t and MC2_t are the point pairs of track MC1 and track MC2 in the tth year, respectively, and dist(MC1_t, MC2_t) is the Euclidean distance between them, which can be calculated as follows: dist(MC1_t, MC2_t) = √[(X1_t − X2_t)² + (Y1_t − Y2_t)²], where X1_t, X2_t, Y1_t and Y2_t are the MC coordinates calculated as above. The unit of distance is the kilometre (km).
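Before turning to the results, the sketch below puts the pieces of this measure together: unitized exponential weights p_k = a^k / Σ a^j (k = 0 for the most recent year, a = 0.75) and a weighted sum of point-pair Euclidean distances. The mean-centre tracks are illustrative coordinates in kilometres, not the computed Beijing MCs.

```python
import math

def weights(m, a=0.75):
    """Unitized exponential weights p_k = a**k / sum(a**j),
    with k = 0 for the most recent year."""
    raw = [a**k for k in range(m)]
    total = sum(raw)
    return [r / total for r in raw]

def trajectory_distance(track1, track2, a=0.75):
    """Weighted Euclidean distance between two MC tracks; each track is a
    list of (x, y) points in km, ordered from the most recent year back."""
    assert len(track1) == len(track2)
    p = weights(len(track1), a)
    return sum(pk * math.dist(q1, q2)
               for pk, q1, q2 in zip(p, track1, track2))

# Toy MC tracks for 2014 back to 2010 (illustrative km coordinates).
ir_track  = [(0.0, 0.0), (1.0, 0.5), (1.5, 1.0), (2.0, 1.0), (2.5, 1.5)]
nsp_track = [(0.5, 0.2), (1.2, 0.8), (1.8, 1.1), (2.1, 1.4), (2.4, 1.9)]
print(f"weighted trajectory distance = "
      f"{trajectory_distance(ir_track, nsp_track):.2f} km")
```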
Spatio-Temporal Distribution The temporal distribution of IRs for each district in Beijing from 2009 to 2014 is shown in Figure 2. The colour of each district represents the average IR related to TB from 2009 to 2014, and the bar charts illustrate the annual IRs from 2009 to 2014 (i.e., IR_2009 to IR_2014) for each district. We used the natural break method, which clusters similar values and maximizes the gaps between groups, to classify the IRs into five groups. The largest average IRs of TB were in Mentougou (51.0/100,000) and Xicheng (45.5/100,000); the next largest average IRs were in Miyun (32.5/100,000), Fangshan (28.9/100,000) and Shunyi. The total average IR of TB across the districts of Beijing over the six years was 21.7/100,000, which was lower than the average IR of TB in China as a whole. The first year in which the average IR of TB across the districts of Beijing fell below 20/100,000 was 2013. We performed two GLM analyses on the six-year data using SPSS (IBM, Armonk, NY, USA): one tested whether the TB IRs differed across years (looking for a temporal trend), and the other whether they differed across districts (looking for a spatial trend). In the first analysis, we considered year (the six years from 2009 to 2014) as the fixed factor; the results showed no significant difference in IRs across years (p = 0.528). In the second analysis, we considered district (the 16 districts of Beijing) as the fixed factor; the results showed a significant difference in IRs across districts (p = 0.000). The sixteen districts of Beijing were accordingly divided into five subsets.
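A minimal sketch of these two one-way comparisons, using Python's statsmodels in place of SPSS and a toy long-format table (not the real district IRs), might look as follows.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Toy long-format data: one row per district-year, IR per 100,000
# residents (illustrative values, not the published Beijing IRs).
df = pd.DataFrame({
    "district": ["Xicheng", "Xicheng", "Chaoyang", "Chaoyang",
                 "Mentougou", "Mentougou"],
    "year":     [2009, 2014, 2009, 2014, 2009, 2014],
    "ir":       [50.1, 41.2, 18.3, 14.9, 55.0, 47.8],
})

# One model with year as the fixed factor (temporal trend), one with
# district as the fixed factor (spatial trend), mirroring the SPSS GLMs.
for factor in ("year", "district"):
    model = ols(f"ir ~ C({factor})", data=df).fit()
    print(f"--- fixed factor: {factor} ---")
    print(sm.stats.anova_lm(model, typ=2))
```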
The cause of the decline in the IRs from 2009 to 2014 may be that the Chinese government has made great efforts to prevent and control TB [38]. The State Council issued documents concerning the National Tuberculosis Control Program (2001-2010) and the National Tuberculosis Control Program (2011-2015) [39], and implemented the DOTS policy at the county level. These policies can detect the source of infection abundantly and directly, can often cure newly discovered patients without the need for hospitalization, can alleviate the financial burden on patients because treatment costs much less, and can decrease the occurrence of drug-resistant TB [40]. DOTS has made remarkable achievements and has effectively suppressed the increasing tendency of TB. The Chinese government has delivered many types of education and awareness activities (especially on World Tuberculosis Day every year) to improve public awareness of TB prevention. Beijing, which is the nation's political, cultural and economic centre, has good economic and medical conditions. Since 1979, Beijing has provided free treatment to a portion of TB patients every year. In recent years, the Beijing government expanded the range of free treatments to include TB patients who were non-permanent residents for the first time and alleviated the inspection charge to a certain extent. Moreover, in addition to the Beijing Research Institute of Tuberculosis Control, there are TB prevention and control institutes in each district that provide TB patients with convenient treatment. By expanding its prevention and control network, Beijing deepened the prevention and control responsibilities of the community service centres to strengthen infection control efforts. Spatial Autocorrelation Analysis The results show that the global Moran's I (−0.1198) was negative and failed to pass the significance level test (p = 0.44), indicating that there was no global spatial autocorrelation and that the occurrence of TB was distributed randomly from 2009 to 2014. The results of the local autocorrelation analysis are mapped in Figure 4. The LISA cluster map shows that the Chaoyang and Daxing districts were Low-Low districts over the six-year study period, indicating that the IRs of Chaoyang and Daxing were low and the IRs of their neighbouring districts were also low. Other districts (in grey) did not show any local spatial autocorrelation characteristics. The results all passed significance level testing (p < 0.05).
From these results, we might surmise that the disease was not spreading in Chaoyang and Daxing because there was no influx of infection from their neighbouring districts. The overall occurrence of TB across the districts of Beijing was relatively low and steady over the six-year period, and there were no outbreaks, indicating that the efforts of the Beijing government were successful. Scan Statistics Analysis The purely spatial scan statistics analysis of high and low IRs at the district level (Table 1) revealed that the first rank cluster contained only the Xicheng district, which had a high IR. The second rank cluster included one low-IR district: Chaoyang. The third rank cluster contained two low-IR districts: Shijingshan and Fengtai. The fourth and fifth rank clusters had high IRs: the fourth contained Mentougou, and the fifth contained Miyun, Huairou, Shunyi and Pinggu. All of the results passed significance level testing (p = 0.001). The distribution of high and low IR spatial clusters in Beijing from 2009 to 2014 is shown in Figure 5. The Xicheng district ranked first, with a high IR over the six-year period. One possible explanation is that Xicheng had a large population and a high population density, resulting in crowded living conditions, poor air ventilation and poor sanitary conditions, which all contribute to TB transmission. The growth of the floating population in Xicheng was not significant, but it was the district with the densest floating population. Moreover, the majority of the floating population in this district engaged in the commercial service industry, which was highly mobile and offered more opportunities for contact with people. Once TB gained a foothold, the disease could be transmitted to the exposed population easily, leading to extensive transmission. The second and third rank clusters were located in the Urban Function Extended Districts, including Chaoyang, Shijingshan and Fengtai.
One possible explanation for this distribution pattern among the low-IR clusters is that although the Urban Function Extended Districts were the main clusters of the floating population (i.e., the floating populations of Chaoyang, Haidian and Fengtai comprised more than 50% of the total floating population of Beijing, with a downward trend), the floating population of the New Districts of Urban Development showed apparent growth and trended toward movement from the Urban Function Extended Districts to the New Districts of Urban Development (i.e., the floating populations of Daxing, Tongzhou, Changping and Shunyi comprised more than 20% of the total floating population of Beijing, with an upward trend). Furthermore, the floating population of the Urban Function Extended Districts primarily comprised fixed groups, such as students and construction workers, who live in certain areas with a limited scope of communication activities. This phenomenon decreases both the chances for individuals to contact others and the possibility of transmission. The purely temporal scan statistics analysis of high and low IRs at the district level (Table 2) revealed that there was only one significant cluster. The first rank cluster showed that the IR was high over the three-year period from 2009 to 2011. The result was consistent with the downward trend in the IR over the six-year study period. The result passed significance level testing (p = 0.001). The space-time scan statistics analysis of high and low IRs (Table 3) revealed four significant clusters. The first rank cluster contained only the Xicheng district, which had a high IR from 2009 to 2011, whereas the second rank cluster contained two districts with low IRs, Dongcheng and Chaoyang, from 2012 to 2014. The third rank cluster included Shijingshan and Fengtai, with low IRs from 2012 to 2014. The fourth rank cluster contained four high-IR districts: Huairou, Miyun, Shunyi and Changping (from 2009 to 2010). The distribution characteristics of the spatial clusters (Figure 6) were consistent with the results of the purely spatial scan statistics analysis. The temporal cluster results were consistent with the downward trend in the IR over the six-year period. All of the results passed significance level testing (p = 0.001). Trajectory Similarity Analysis The MC analysis revealed that the majority of the MCs were in adjacent locations near the common boundaries of Chaoyang and Changping and of Chaoyang and Haidian. The general distributions of the NSPRP MCs and NSPTBP MCs were to the east of the IR MCs (Figures 7 and 8). The general distributions of the RSPRP MCs and RSPTBP MCs were to the south of the IR MCs (Figures 9 and 10).
The results of the trajectory similarity analysis revealed that from 2010 to 2014, the similarity between the IR and NSPRP (2.70) was the greatest; the similarity between the IR and NSPTBP (3.41) was next; the similarity between the IR and RSPRP (4.46) ranked third; and the similarity between the IR and RSPTBP (5.06) was the lowest (Table 4). Because the NSP trajectory was the most similar to the IR trajectory, and NSP TB patients are newly discovered patients who could be latent and could transmit TB to others, more attention should be paid to the discovery of NSP TB patients in the western part of Beijing. Because RSP TB is still transmissible and has a greater possibility of becoming resistant to drugs, the southern part of Beijing may have a lower cure rate than the northern part; therefore, TB patients in the southern part require intensive treatment. Thus, health-related policies should be formulated to address these factors, and public health resources should be allocated more appropriately. By improving the identification of NSP TB patients and the cure rate of RSP TB patients, the prevention of TB in Beijing would achieve better results.
Conclusions This study investigated the distribution characteristics of spatio-temporal clusters of TB at the district level in Beijing from 2009 to 2014 using GeoDa and SaTScan software and ArcGIS tools; furthermore, it evaluated the trajectory similarities between the IR and NSPRP/NSPTBP/RSPRP/RSPTBP over a five-year period to provide guidelines for the allocation of public health resources and to strengthen health-related policies. We found that the IRs of Beijing exhibited a gradual decrease from 2009 to 2014, possibly because of the Beijing government's considerable efforts to prevent and control TB. Global spatial autocorrelation was not observed across all of the districts of Beijing, indicating that the occurrence of TB was randomly distributed. However, there was local spatial autocorrelation at the district level, with Chaoyang and Daxing as the Low-Low districts over the six-year period. The scan statistics analysis showed spatial, temporal and spatio-temporal clusters of high and low IR. The distribution of the MCs showed that the general distributions of the NSPRP MCs and NSPTBP MCs were to the east of the IR MCs. Conversely, the general distributions of the RSPRP MCs and RSPTBP MCs were to the south of the IR MCs. Based on the combined analysis of the MC distribution characteristics and trajectory similarities, the trajectory of NSP TB was most similar to the trajectory of the IR. Thus, more attention should be focused on identifying NSP TB patients in the western part of Beijing, whereas the southern part of Beijing needs to offer intensive treatment for RSP TB patients. These results can be used by urban public health officials and related decision makers to allocate public health resources and to formulate prioritized health-related policies. Because China has only recently begun to share medical data with the public, we only had access to six years of data. In addition, we had only the numbers of patients and did not have access to more detailed information about patients, such as their occupation, gender and age. The TB data were available at the spatial resolution of the administrative district; if we had data at the spatial resolution of the subdistrict community, we could detect the distribution characteristics more specifically. This study only qualitatively analysed the causes of the TB distribution characteristics. Because of data limitations, it is hard to detect the latent factors leading to the distribution characteristics at this stage. Therefore, further studies are necessary to detect the risk factors. Furthermore, we used an exponential function to give the point pairs different weights. In future studies, we will explore a more sophisticated and appropriate weighting function to replace this basic one. These limitations will be investigated in the future.
Hemi-hysterectomy for placenta accreta in a bicornuate uterus Introduction This paper reports a case of hemi-hysterectomy for placenta accreta in a bicornuate uterus. Case report This is a case of a 29-year-old G3P1021 whose pregnancy was complicated by a bicornuate uterus, a history of cervical incompetence with cerclage placement, and a retained placenta in the right uterine horn after a term vacuum-assisted vaginal delivery. Magnetic resonance imaging demonstrated placenta increta in the right uterine horn, and the patient underwent an abdominal supracervical hemi-hysterectomy and right salpingectomy. Conclusion Our patient's three D&C procedures and her uterine anomaly likely contributed to her placenta accreta and the need for this unique fertility-preserving surgery. Introduction Hemi-hysterectomy during pregnancy is extremely rare. Only two prior case reports describe this procedure. One describes a hemi-hysterectomy for a rudimentary uterine horn pregnancy followed by two normal gestations 1. The second was performed for a uterine rupture at an early gestation in a unicervical bicornuate uterus 2. We report the case of a post-partum hemi-hysterectomy for a placenta accreta. Case report A 29-year-old G3P0020 at 39 weeks gestation presented in labour. The patient had a known bicornuate uterus with the pregnancy located in the right horn, and had a McDonald cerclage placed for a shortened cervix. The patient had three prior dilation and curettage (D&C) procedures. One was for a vaginal delivery of a 19-week spontaneous abortion complicated by retained products of conception (POC). Another was performed for an 8-week spontaneous abortion, followed by a third procedure for retained POC. The patient underwent a vacuum-assisted vaginal delivery of a viable infant. The placenta did not deliver, and an attempt at manual extraction was made, but no plane could be developed between the placenta and the uterus. Bedside ultrasound raised suspicion of placenta accreta. The patient was stable, with no active vaginal bleeding, and desired future fertility. Therefore, the decision was made to leave the placenta in situ and follow with expectant management. Magnetic resonance imaging of the pelvis showed findings consistent with placenta increta at the lateral aspect of the right uterine horn (Figures 1 and 2). On the second postpartum day, the patient experienced abdominal cramping, vaginal bleeding, and an eight percent drop in her haematocrit. She was taken to the operating room for a planned exploratory laparotomy and hysterectomy. Intra-operative inspection and consultation with an infertility specialist revealed the possibility of proceeding with a hemi-hysterectomy of the right uterine horn in an attempt at fertility preservation. The left horn and bilateral adnexa appeared normal. After a vasopressin injection between the two horns, the right uterine horn was extirpated without complication. The left horn was repaired in layers. Final pathology was consistent with placenta accreta. Post-operatively, she has resumed normal monthly menses. Discussion In a review of the distribution of Mullerian anomalies, the mean incidence of bicornuate uterus was 46% 3. This anomaly is associated with both fertility and obstetric complications. Grimbizis et al. identified an overall spontaneous abortion rate of 36%, a preterm birth rate of 23%, a term delivery rate of 40.6% and a live birth rate of 55.2% 4.
Bicornuate uteri are also associated with a high incidence of cervical incompetence, approaching 38% 5. In our patient, placement of an abdominal cerclage at the time of hemi-hysterectomy was discussed. However, we were unable to clearly demarcate the lower uterine segment from the cervix. Prenatal diagnosis of a placenta accreta can be made by ultrasound; however, there was no reported concern for placenta accreta in our patient. Risk factors associated with placenta accreta include prior uterine surgery, placenta previa, advanced maternal age, smoking, multiparity, a short cesarean-to-conception interval, uterine irradiation and in-vitro fertilization 6. Conclusion Our patient's three D&C procedures and her uterine anomaly likely contributed to her placenta accreta and the need for this unique fertility-preserving surgery. Figure 1: Magnetic resonance imaging of the pelvis showing a sagittal view of the two uterine horns and the retained placenta located in the right uterine horn. Figure 2: Magnetic resonance imaging of the pelvis showing a transverse view of the placenta at the lateral aspect of the right uterine horn, suspicious for increta.
Research on Three-dimensional Innovative Application of Engraved Paper Sculpture and Mianzhu New Year Painting As an important part of Chinese folk art, paper sculpture has a long history and is rooted in folk soil. Engraved paper sculpture is a new form of sculptural art creation. In terms of creative themes and expression techniques, it follows the strong local color and national style of folk art. With the progress of science and technology, folk art has been valued in its diversified development, and paper sculpture art also needs constant innovation. Combining field investigation with a review of the relevant literature, this paper gives an overall overview of the origin, development and history of paper sculpture art, engraved paper sculpture art, Mianzhu New Year painting and Sichuan Opera, and carries out multi-angle analysis and research on their types, themes, folk customs and implied meanings, line carving styles, colors and design modelling. Finally, through summary and analysis, it puts forward a three-dimensional exploration practice of engraved paper sculpture. Keywords—engraved paper sculpture; Mianzhu New Year painting; innovation INTRODUCTION According to Article 2 of "The Convention for the Safeguarding of the Intangible Cultural Heritage", intangible cultural heritage refers to the practices, performances, forms of expression, knowledge and skills of various communities, groups and sometimes individuals that are regarded as their cultural heritage, as well as related tools, objects, handicrafts and cultural sites. In the "Interim Measures for the Application and Assessment of Representative Works of National Intangible Cultural Heritage", China officially defines intangible cultural heritage as "various forms of expression of traditional culture that people of all ethnic groups have inherited from generation to generation and are closely related to people's life, such as folk activities, performing arts, traditional knowledge and skills, as well as related utensils, physical objects, handicrafts and cultural space". China's accession to the convention for the protection of intangible cultural heritage has created a sound environment for pushing the protection of intangible cultural heritage onto the track of legalization and standardization and for protecting China's intangible cultural heritage. As an important part of intangible cultural heritage, paper sculpture art is the crystallization of the wisdom of the Chinese nation and has infinite artistic charm. As a new type of paper sculpture created in the new era, engraved paper sculpture has far-reaching significance for the inheritance and development of Chinese culture. Under today's multi-cultural impact, art can only continue if it keeps innovating in the process of inheritance. How to excavate excellent traditional culture and create engraved paper sculpture art in line with modern aesthetics is an issue worth thinking about and studying. The paintings are characterized by a profound artistic conception full of interest. A. Domestic Research Status From Cai Lun's invention of papermaking technology to the continuous technical processing of paper, and then to the improvement of paper abroad, paper art has a long history. Paper sculpture originated in the Han dynasty, and its development has always been closely related to people's life.
For example, the needs of certain sacrificial occasions in ancient times produced all kinds of paper offerings, including figures, objects and even whole scenes, so that paper offerings gradually became a decorative art for celebrating festivals. In the old countryside, when a daughter got married, her mother would make her a paper sculpture pattern, an object at once practical and aesthetic. From the founding of New China to the reform and opening up, people's appreciation of art kept rising, and engraved paper sculpture came into being, stirring the public spirit with its unique artistic charm. In the course of its development, engraved paper sculpture has combined the traditional Chinese folk arts of paper cutting, engraving and fine brushwork. Paper sculpture art has therefore become an indispensable part of folk art. Paper cutting is the most common form of paper sculpture art; it pervades daily folk life in China and still shines in contemporary society. With the rise of people's aesthetic demands, the creative forms of paper sculpture art are constantly being renewed, and the art of engraved paper sculpture is a product of this development. In the process, engraved paper sculpture has continually absorbed the creative techniques of other arts to form its own distinctive artistic character. The work "Riverside Scene at Qingming Festival" by the artist Wang Liming is an example. B. Overseas Research Status With the Western industrial revolution, paper sculpture art gradually advanced abroad. The origin of Western paper sculpture can be traced back to the middle of the 18th century in Europe, and its popularity was begun by a group of artists who loved handicraft creation. The earliest paper sculptures were made from coarse paper produced from a plant called papyrus; later, more expensive animal skins were used in creation. Not until the 20th century were comparable paper substitutes adopted in place of expensive animal skins, which brought down the cost of paper sculpture. The maturing of paper sculpture technology and the spread of paper paved the way for its popularity in daily life. The paper sculpture work "The Origin of Heaven" by the Danish paper-art master Peter Callesen refreshes Western paper sculpture art with its ingenious artistic conception. Asya Kozina, a Russian paper sculptor, made a stunning Baroque hat in homage to that glorious era, a true marvel. III. CLASSIFICATION OF PAPER SCULPTURE By technique, paper sculpture can be classified into four types, as follows. • Paper cutting In a broad sense, paper cutting refers to handicraft activities of cutting and carving paper with scissors, cutting knives and other tools. Its techniques include folding, yin (incised) cutting and yang (relief) carving, and pin-pricking, among others. • Paper carving The technique of creating on paper with a knife is called paper carving. With the maturing of science and technology, laser engraving has emerged, in which a laser replaces the cutting knife for working the paper. Paper sculpture works made in this way can be put into batch production, which suits mold making. • Paper rolling Also called paper quilling, paper rolling is a craft of curling strips of paper and collaging them according to a design idea, often with the help of a pin tool for the curling.
Paper quilling works are highly decorative; the craft often uses a pin tool for curling the paper, and it originally spread outward from the European royal courts as an aristocratic handicraft. • Paper folding The manual activity of folding paper into different shapes is called paper folding. Its material is not limited to ordinary paper. Paper folding is often used in children's early education: on the one hand it improves children's practical skills, and on the other it opens the door to their understanding of manual art, making learning a pleasure. In terms of space, paper sculpture can mainly be divided into the following two categories: • Plane paper sculpture Literally, plane paper sculpture covers cutting, carving, hollowing and other engraving activities carried out with scissors, cutting knives and other tools on a flat sheet of paper. Its common forms are paper cutting and paper carving. Paper carving is part of paper cutting in the broad sense: carving on the paper surface with a knife as the tool. The design is first drawn on the paper surface and then engraved, which also distinguishes paper carving from paper cutting in the narrow sense. Through years of paper-cutting practice, paper-cutting artists carry well-thought-out designs in their minds, so most drafts are produced from accumulated experience; paper carving, by contrast, leans toward preparing a draft first and then engraving, creating while innovating. With the development of culture and the maturing of technology, laser engraving machines appeared on the market, which can turn patterns drawn on a computer into mechanized carving and greatly reduce the time needed for hand cutting. As a result, engraving templates can be produced in large quantities. However, the patterns for laser engraving cannot be too small, which marks its difference from hand carving. • Three-dimensional paper sculpture Any form of paper sculpture that protrudes from the plane can be called three-dimensional paper sculpture. It has many production techniques, among which interspersing, interpenetration, rolling, folding and multilayer stacking are commonly used. Three-dimensional paper sculpture is the chief focus of study for modern paper sculpture artists. Embracing all sorts of themes and styles, it can take the fresh pastoral style of European three-dimensional paper sculpture, or the realistic style of the work "Self", with its sculpture-like sense of modeling, by Li Hongjun. The technologies commonly seen in three-dimensional paper sculpture are paper rolling, flower making by drawing out the paper, paper piercing and paper folding. IV. ARTISTIC ANALYSIS OF THREE-DIMENSIONAL PAPER SCULPTURE The art of paper sculpture has matured only in recent decades. Before the popularization of the computer, people could perceive the form of three-dimensional works in three-dimensional space only through pictures. In the early days, paper sculpture designers used simple techniques and a flexible artistic language that could still resonate with audiences. Nowadays the artistic means are more diverse: colors, details and textures, as well as the overall presentation in three-dimensional space, can be previewed in computer-aided design, so that before a work is finished people can look directly into its artistic expression.
In outstanding works of three-dimensional paper sculpture, three-dimensional composition and plane composition are inseparable, and from their union come the formal beauty and varied styles of the work. Unlike a photographic exhibition, three-dimensional paper sculpture can be perceived directly by the senses; it is pleasant and interesting as well as functional, and by breaking through the limitations of the plane it can create a fully dimensional space. The composition and basic rules of three-dimensional paper sculpture are more readily presented through the laws of perspective. The viewer can discover and explore the structure of the work and enjoy the visual images of the three-dimensional paper sculpture, which presents its spatial relations, relations of the solid and the void, and its scattered highs and lows in an orderly way. V. ARTISTIC ANALYSIS OF MIANZHU NEW YEAR PAINTINGS Mianzhu New Year painting, also known as the Mianzhu wood-block New Year print, is named after its place of origin, Mianzhu City. Under the influence of Bashu culture its colors are bright, preserving the character of Sichuan and embodying the ever-optimistic outlook of the Bashu people. With their broad themes and many types, Mianzhu New Year paintings are a folk art unique to China; they are often posted during the New Year to express people's wishes for a better life. A. Study on the Artistic Features of Mianzhu New Year Paintings In creation, Mianzhu New Year paintings inherit the style of hand-painted New Year pictures from before the Tang Dynasty and continue the maturing woodblock printing style of the Song Dynasty. The composition is exquisite, diverse yet unified; the lines are refined and smooth, properly dense and sparse, giving a strong sense of rhythm; exaggerated, symbolic techniques are often used in the modeling, making the paintings more vivid. As for color, influenced by the traditional Chinese concept of the five colors, Mianzhu New Year painting artists have, over long artistic practice, worked out a coloring method with a Bashu flavor: "black first (the line plate is printed in black), white second (the base color of hands, faces and shoe soles is painted white), gold third (costumes and props are painted orange-yellow), and then all the colors of the rainbow in the costumes (magenta, peach, lead yellow, ultramarine, reddish blue, light green, etc.)". This gives a simple yet intense, bright and warm feeling of color, embodying the simplicity and wisdom of the working people. In addition, with the help of closely related colors, the rhythm of the picture is enhanced, and even within strong contrast attention is paid to comfort and harmony. B. Problems and Improvement Measures of Mianzhu New Year Paintings First, the themes of Mianzhu New Year painting are too traditional and fail to attract the attention and interest of young people; more research should therefore go into theme selection and innovation in character modeling. Second, with so many types of paper sculpture, including paper cutting, paper folding and paper carving, ordinary viewers are confused about the relationships among them, which hinders the development of the category; the unifying concept of "paper art" can therefore be put forward.
Third, at present there are more paper sculptures in the West, where modeling and theme selection tend toward the cartoonish and the popular; rarely integrating Chinese national elements, they cannot reflect the character of the modern development of Chinese paper sculpture, and the artistic specialization is weak. Fourth, within paper sculpture there are more plane works, since three-dimensional paper sculptures are harder to make and less convenient for non-professionals to learn. How to move easily between plane and three-dimensional paper sculpture is therefore the main issue of this research. VI. THREE-DIMENSIONAL INNOVATIVE PRACTICE OF ENGRAVED PAPER SCULPTURE A. Creation Ideas Chinese culture is broad and profound, and includes folk arts of beautiful imagery and far-reaching significance. But under the impact of multiculturalism, many fine folk arts have long been lost. Efforts should therefore be made to bring forth the new, creating a new artistic life with Chinese characteristics. Engraved paper sculpture is an excellent form of Chinese paper sculpture. Through investigation and analysis of Mianzhu New Year paintings and Sichuan Opera, this design practice finds the possibility of their coexistence: among the square-format Mianzhu New Year paintings there are works depicting highlights of Sichuan Opera, which suggests the feasibility of creating three-dimensional engraved paper sculptures. By combining the three, paper sculpture works with Chinese charm can be created. This is not only an innovation in engraved paper sculpture but also a promotion of the arts of Mianzhu New Year painting and Sichuan Opera. B. Design Objective Integrating folk artefacts that carry ethnic beliefs with modern design thinking, giving them broader room for development and forming rich visual images and diversified products, will help promote three-dimensional paper sculpture and Mianzhu New Year paintings. By collecting literature on the origin and artistic characteristics of Mianzhu New Year paintings and carrying out field investigation in a Mianzhu New Year painting village, the researcher analyzed the state of inheritance and development of the paintings there and explored the themes and forms of expression that practical, applied works should adopt. The chosen theme (plot) was then applied to the work to design scenes, characters and other elements. Finally, the idea was realized as three-dimensional works using the techniques of engraved paper sculpture, with lighting and a wooden frame added to achieve the ornamental effect. C. Design of the Main Modelling of the Painting This innovative practice of engraved paper sculpture combines the story and stage form of Sichuan Opera with the character modeling of Mianzhu New Year painting. Overall, the stage plot of the Sichuan Opera "Madam White Snake" serves as the blueprint for the creation, so that all these art forms can blossom within one field. The modeling features of Mianzhu New Year paintings are drawn on in the engraving of the figures, the objects and the whole scene, lending the work fluency, a match of firmness and gentleness, and a strong sense of rhythm in the interplay of motion and stillness.
D. Color Application Drawing on the color skills of Mianzhu New Year painting, the colors of the engraved paper sculpture not only retain the painting's distinctive features but also present an artistic effect of strong contrast and bright color. This strengthens the dynamics of the story, fills it with vitality, and maintains harmony within the strong contrast. E. Materials For ease of engraving and preservation, paperboard of higher hardness and flexibility is used in this design, which also helps the three-dimensional presentation of the engraved paper sculptures. In production, part of the carving is done with a carving knife; auxiliary wood pieces and glue are used to complete the assembly; and finally a wooden frame is used for mounting and display. F. Works Display The characteristic modelings of Mianzhu New Year painting are combined with related fairy-tale scenes, and the modeling techniques of three-dimensional paper sculpture are used for restructuring and re-expression, producing a set of decorative three-dimensional paper sculpture works of Mianzhu New Year painting. The design can be applied to the appearance of tourist souvenirs or daily necessities; the series of products combining Mianzhu New Year painting with three-dimensional paper sculpture can also be integrated into interior decoration and display spaces. Three themes serve as the story concepts of the designs: the dramatic theme of "Madam White Snake" (see "Fig. 1"), the theme of "The Portrait of a Lady" (see "Fig. 2"), and the theme of "the God of Door" (see "Fig. 3"); the overall effect of the work can be seen in "Fig. 4". VII. CONCLUSION The development of artistic expression mirrors the development of human society. Paper sculpture entered human life long ago and has kept developing and innovating to this day. With its deep cultural connotations and strong regional attachment, it is a spiritual crystallization left by Chinese ancestors. In contemporary society, to carry these fine traditional cultures forward it is necessary to keep adding new blood and bringing forth the new on the basis of the old, so as to meet the aesthetic needs of modern people. Through this creative practice in paper sculpture art, the wonderful Mianzhu New Year painting is combined with engraved paper sculpture, the essence of the former being integrated into the practice of the latter. It is not only an innovation in engraved paper sculpture but also an inheritance of Mianzhu New Year painting, enabling China's fine artistic creation to be presented to the public in a new, three-dimensional form and to show its brilliance, thereby promoting the common development of paper sculpture art and Mianzhu New Year painting.
2019-12-05T09:11:15.805Z
2019-11-01T00:00:00.000
{ "year": 2019, "sha1": "b979be536d0c9a1ec071fc9f5ac309829051294f", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.2991/icassee-19.2019.73", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "13e50e902e0199e8e29a5d9db9bd3b996306daa7", "s2fieldsofstudy": [ "Art" ], "extfieldsofstudy": [ "Art" ] }
240754560
pes2o/s2orc
v3-fos-license
The main objective of this paper is to measure vital parameters, such as blood pressure, pulse rate, ECG (electrocardiogram), temperature and retinal function (ERG, electroretinogram), of a human subject, with the individual observed in his or her own home. In this research the patient's well-being is monitored using sensors, and the acquired information is transmitted to a microcontroller unit. The data is collected on the receiver side over Bluetooth and displayed on an Android mobile. I. INTRODUCTION Nowadays, technology has entered every part of daily life, and life has become very fast in the present technological environment; people pay little attention to their health [1]. Many researchers have worked on ECG and ERG. The system in [3] is based only on ECG monitoring: the ECG signal is captured with a sensor and the sensed value is sent to the hospital through a wireless module. The work in [4] is based on ballistocardiography (BCG), measuring the ballistic force of the heart: the BCG signal is captured with a sensor, passed through data acquisition (DAQ) and compared with a reference value, the output being the comparison of the two values. There is therefore a need to design a complete framework that yields fast and precise body-condition readings and sends the results to a smartphone via Bluetooth. Abdurrahman Mohammad Alaql presented a novel 'Analysis and Processing of Human Electroretinogram'. The system in [5] is based on ERG monitoring of patients, in which the functioning of the human eye is observed. Its controlling unit is a PC, which initiates the test by triggering a timed pulse of light; the light source is directed at the subject's eye and the electrodes are attached around the subject's pupil. The ERG signal is passed through a microcontroller for analog-to-digital conversion, and the ERG output is displayed on an LCD; MATLAB is used to trigger the light source. The author of [2] notes that present-day hospitals are large, with many wards situated in different places, such as men's wards, women's wards, maternity wards, general wards, special rooms and, most critically, ICUs. Doctors must keep monitoring all the patients in these wards continuously, which requires ever more nurses and other staff, and it is not practical for doctors to visit every ward and every patient as often as every thirty minutes. Keeping all these aspects in mind, this paper develops wireless vital-parameter monitoring in the home itself. In this framework the patient's vital parameters, such as body temperature, blood pressure, pulse rate, ECG and ERG, can be monitored continuously. The system is very useful for medical applications, being compact in size and cost-effective. In emergencies, elderly people suffering from heart disease need continuous monitoring [6], which is sometimes impractical in the clinic, or the patient may be located far from the hospital. In such cases this prototype circuit is valuable for measuring the person's pulse and temperature; the data is transmitted to the medical adviser so that the necessary precautions can be taken, keeping the patient under control and protected from a serious situation before reaching the hospital.
II. DESIGN METHODOLOGY In this framework, continuous monitoring of the patient's various parameters, such as body temperature, blood pressure, pulse rate, ECG and retinal function, is performed, and the readings are displayed on the patient's side for the patient's convenience. The same information is transmitted continuously to the receiver side, in the doctor's cabin, where the data is gathered with Zigbee and Arduino and displayed on a smartphone. The proposed block diagram is shown in Fig. 2 (block diagram of the electroretinogram module). The electroretinogram module consists of an amplifier, a band-pass filter, an analog-to-digital converter and a computer. To test eye function, electrodes are fixed around the eye's cornea and light is pulsed repeatedly into the field of vision. The electrodes pick up the eyeball's pulse activity; these pulses have a very low voltage level and a high noise content, so the signal is passed through an amplifier and a band-pass filter: the amplifier boosts the signal and the band-pass filter reduces its noise level. The signals are then converted to digital using the ADC, and the results are shown on the PC and on the LCD. The LM35 is an integrated-circuit sensor used to measure temperature, with an electrical output proportional to the centigrade temperature; its low output impedance, linear output and precise inherent calibration make it easy to interface to readout circuitry. The ECG sensor is used to measure the electrical activity of the heart. This electrical signal is analog in nature and extremely noisy, and the measured pulse signal has a very low voltage level, so it too is passed through an amplifier and band-pass filter before being converted from analog to digital by the microcontroller; the result is displayed on the computer and the LCD. The Arduino Uno is an 8-bit microcontroller board based on Atmel's ATmega328; it is used here to perform the analog-to-digital conversion processing and to control the receiver part. Bluetooth is one of the wireless communication technologies used in a wide range of systems; here it is used to transfer the data from one place to another.
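Putting the temperature path just described into code, a minimal Arduino sketch might look as follows. This is our illustration rather than the authors' firmware: the analog pin, the 16x2 LCD wiring, and the assumption of an HC-05-style Bluetooth module on the hardware serial port are ours.

```cpp
#include <LiquidCrystal.h>

// Assumed wiring: LM35 output on A0; 16x2 LCD on the usual Uno example pins.
LiquidCrystal lcd(12, 11, 5, 4, 3, 2);

void setup() {
  lcd.begin(16, 2);
  Serial.begin(9600);  // a serial Bluetooth module would relay this stream to the phone
}

void loop() {
  int raw = analogRead(A0);                  // 10-bit ADC: 0..1023 over 0..5 V
  float millivolts = raw * (5000.0 / 1023.0);
  float tempC = millivolts / 10.0;           // LM35 outputs 10 mV per degree Celsius
  lcd.setCursor(0, 0);
  lcd.print("Temp: ");
  lcd.print(tempC, 1);
  lcd.print(" C");
  Serial.println(tempC, 1);                  // mirrored to the receiver side
  delay(1000);
}
```

The same read-scale-display pattern applies to the other analog channels; only the scaling step changes with the sensor.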
A. Simulation Circuit The simulation circuit of the proposed system is shown in Fig. 3. It was built with Proteus 8, widely used circuit-simulation software in which each component has its own library: when a component is needed, it is searched for through the Pick Devices option in the schematic capture window and placed on the working sheet. Fig. 3. Simulation diagram. In Fig. 3 a potentiometer (POT) stands in for the sensors: the POT is a variable resistance whose value is fed to the microcontroller. Proteus accepts only a .hex file, so the Arduino IDE is used to compile the .ino sketch into a .hex file, which is then programmed into the simulated microcontroller. The microcontroller converts the analog voltage to a digital value, which is displayed on the LCD. B. Simulation Result The simulation result of the proposed system is shown in Fig. 4. During simulation an approximate output is obtained from the circuit; the output changes in accordance with the change in resistance value. The maximum and minimum resistance values of the sensor are given in Table 1. C. Hardware Implementation In this research an Android-based, low-cost pulse and body-temperature monitoring system has been implemented. The device is compact and lightweight, so it can easily be carried anywhere. Various sensors monitor the pulse, blood pressure and body temperature and convert the readings into digital form. These readings are compared with the desired values stored in the processor, shown on the LCD display and sent to an Android mobile through Bluetooth. The hardware implementation of the proposed system is shown in Fig. 5. Here the pulse sensor and the LM35 sensor detect the heartbeat, pulse, blood pressure and temperature, while electrodes fixed to the hand and around the eye's cornea sense the ECG and retinal function. The sensed values pass through the amplifier circuit to the microcontroller unit. A crystal oscillator generating 11.0952 MHz clocks the microcontroller. The sensor-circuit data is converted by the ADC, and the values are displayed on the LCD and on an Android mobile via Bluetooth. The following results were obtained during testing: Fig. 6 shows the display of heartbeat and body temperature. At the initial stage the sensors read zero when not in contact with the human body, except the temperature sensor, which reads room temperature. The hardware kit is interfaced to a computer with an RS232 cable, and the outputs of the ECG, ERG, pulse rate and temperature are viewed on the computer using LabVIEW software, as shown in Fig. 9. Fig. 9. Digital output using LabVIEW. As a result, patients who must monitor their body condition continuously can do so using an Android mobile phone. IV. CONCLUSION In the proposed system the prototype comprises an ATmega328 microcontroller, a Bluetooth module, an LCD, the individual sensors and supporting circuitry, and it sends messages to the corresponding observer's mobile phone so that the patient can be looked after within a fixed time interval. The auto-alert facility of this system operates under abnormal conditions, when the readings of the vital signs exceed a fixed level. The device has ample scope for improvement in further research. In future this work can be enhanced by adding automatic administration of medicines to patients in critical conditions and by tracking people's locations with a GPS tracking system. In the automatic injection system, when the heartbeat level goes too high or too low, the system injects medicine into the patient's body automatically; the GPS tracker is used to locate the patient at any time. In addition to these features, the patient also receives advice from a pre-stored voice recorder.
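As a closing illustration, the auto-alert behaviour described above amounts to a threshold check around the sensor readings. The sketch below is ours, not the prototype's firmware, and the threshold values are placeholders rather than the limits actually used in the device.

```cpp
// Illustrative auto-alert logic; threshold values are assumptions, not the prototype's.
const int PULSE_LOW = 60, PULSE_HIGH = 100;  // beats per minute (assumed normal range)
const float TEMP_HIGH = 38.0;                // degrees Celsius (assumed fever limit)

void sendAlert(const char *msg) {
  Serial.println(msg);  // a serial Bluetooth module would relay this to the observer's phone
}

void checkVitals(int pulseBpm, float tempC) {
  if (pulseBpm < PULSE_LOW || pulseBpm > PULSE_HIGH) sendAlert("ALERT: abnormal pulse rate");
  if (tempC > TEMP_HIGH) sendAlert("ALERT: high body temperature");
}

void setup() { Serial.begin(9600); }

void loop() {
  int pulseBpm = 0;   // placeholder: in the real system this comes from the pulse sensor
  float tempC = 0.0;  // placeholder: in the real system this comes from the LM35 reading
  checkVitals(pulseBpm, tempC);
  delay(1000);
}
```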
2019-09-17T02:48:05.982Z
2019-08-30T00:00:00.000
{ "year": 2019, "sha1": "d84f7315a9972ce12916becaa9c11ce669001f89", "oa_license": null, "oa_url": "https://doi.org/10.35940/ijeat.f8465.088619", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ace567a31f49931762ab1645e2f61a0bbae7d295", "s2fieldsofstudy": [ "Engineering", "Medicine", "Computer Science" ], "extfieldsofstudy": [] }
269838688
pes2o/s2orc
v3-fos-license
Understanding the role of early life stress and schizophrenia on anxiety and depressive like outcomes: An experimental study Background: Adverse experiences due to early life stress (ELS) or parental psychopathology such as schizophrenia (SZ) have significant implications for an individual's susceptibility to psychiatric disorders later in life. However, it is not fully understood how ELS affects social behaviors or the developing prefrontal cortex (PFC). Objective: The aim of this study was to investigate the impact of ELS and ketamine-induced schizophrenia-like symptoms (KSZ) on anhedonia, social behavior and anxiety-like behavior. Methods: Male and female Sprague-Dawley rat pups were allocated randomly to eight experimental groups: control, gestational stress (GS), GS + KSZ, maternal separation (MS), MS + KSZ, KSZ parents, KSZ parents and pups, and KSZ pups only. ELS was induced by subjecting the pups to GS and MS, while schizophrenia-like symptoms were induced through subcutaneous administration of ketamine. Behavioral assessment included the sucrose preference test (SPT) and the elevated plus maze (EPM), followed by dopamine testing and analysis of astrocyte density. Statistical analysis involved ANOVA and post hoc Tukey tests, revealing significant group differences and yielding insights into behavioral and neurodevelopmental impacts. Results: GS, MS and KSZ (dams) significantly reduced the hedonic response and increased anxiety-like responses (p < 0.05). Notably, normal parental mental health was associated with a reversal of the observed decline in glial fibrillary acidic protein-positive astrocytes (GFAP+ astrocytes) (p < 0.05) and a reduction in anxiety levels, implying a potential protective influence on depressive-like symptoms and PFC astrocyte functionality. Conclusion: The present study provides empirical evidence supporting the hypothesis that exposure to ELS and KSZ in dams has a significant impact on the development of anxiety- and depressive-like symptoms in Sprague-Dawley rats, while positive parenting has a reversal effect. Introduction Early life stress (ELS) covers a range of detrimental childhood experiences, such as abuse, neglect and maltreatment, that have profound effects on brain and mental development and constitute major risk factors for structural brain changes and adult psychopathology [1,71]. It is widely acknowledged that ELS can exert profound and enduring impacts on an individual's emotional and cognitive well-being later in life. For a long time, the precise neural underpinnings of the outcomes of ELS remained inadequately understood; however, through translational research in rodents [66,82], it is postulated that distressing occurrences may induce alterations in the structure of the brain regions that regulate emotional responses in humans [46,63].
In light of this, several studies have shown that rats exposed to MS, GS or chronic unpredictable stress (CUS) exhibit diminished astrocytic volume in the medial prefrontal cortex [38,72], a reduced hippocampus [74,86], heightened activity of the hypothalamic-pituitary-adrenal axis [27,62,69], and concomitant impairment of sensory and cognitive functions [18]. In a manner akin to the impact of ELS, various parental factors in humans, such as psychopathology and socioeconomic and social circumstances, can also substantially influence a child's later neurodevelopmental trajectory [35,4]. In addition, numerous human studies have established a correlation between parental psychopathology such as schizophrenia (SZ) and a heightened susceptibility to internalizing and externalizing problems in children [28,56,77], seen through disruptive behavior. Schizophrenia is a complex neuropsychiatric condition with a global prevalence of approximately 1% [11,55], marked by a spectrum of symptoms encompassing delusions, hallucinations, disorganized speech or conduct, and impaired cognitive function [94,25]. Empirical evidence substantiates the notion that genetic components exert a substantial influence on the development of SZ, while environmental and social factors also potentially contribute to its onset, particularly among susceptible individuals [31,81]. Nevertheless, the prevailing body of research predominantly examines these variables in isolation and their distinct effects on child psychopathology, behavior and cognition [75,84]. Building on this foundation, the current study employs GS, MS and KSZ in rats, exploring their interactive impact on the development of psychiatric illness later in life. Neurochemical processes intricately govern the complex interplay between emotional and cognitive behaviors in the brain. These processes involve a network of neurotransmitters, receptors and signaling pathways that collectively regulate mood, cognition and other higher-order functions. Among the various cellular actors involved, astrocytes have emerged as key players in modulating synaptic activity and maintaining neural homeostasis [45,47]. Astrocytes assume a pivotal role in normal brain development, so comprehending the temporal dynamics of their maturation is imperative to understanding how early life adversity may influence their lifelong function and associated behaviour [5,15,24,76]. Astrocytic development initiates in the concluding stages of embryonic development and experiences a rapid surge in numbers within the first month after birth; in the hippocampus, the peak generation of astrocytes occurs during the second postnatal week [67,9]. In humans, astrogenesis is believed to initiate during the final stages of gestation and to extend throughout the postnatal period, resulting in a continuous rise in the number of GFAP+ cells across the central nervous system [57,59,61,70,79]. This developmental timeline aligns with early sensitive periods, rendering astrocytes particularly susceptible to the effects of ELS in both humans and rodents. Consequently, it becomes crucial to explore how ELS impacts astrocytes and their potential contribution to ELS-associated brain dysfunction.
In the realm of human health, conditions like depression and anxiety frequently coexist, with impaired social functioning often preceding the onset of these disorders. As a result, the investigation of disrupted social behaviors holds significance for unraveling how the stress response develops. The current investigation endeavors to bridge this knowledge gap through an empirical inquiry into the collective influence of ELS and SZ on the development of anxiety- and depressive-like symptoms in rats. The rationale for combining ELS and parental SZ (PSZ) in this rat model is to simulate the multifaceted nature of human experience. Experimental animals and treatment The Sprague-Dawley rat strain has been widely recognized for its susceptibility to a range of health ailments, such as stress, cancer, diabetes and cardiovascular disorders, which closely mirror human conditions (Brower et al., 2015). In alignment with this motivation, a total of twenty-four purebred Sprague-Dawley rats, comprising sixteen healthy nulliparous females in the fertile phase and eight proven stud males, were procured from the Biomedical Resource Unit (BRU) breeding centre at the University of KwaZulu-Natal (UKZN). The animals were bred and raised in the controlled environment of the Animal House at the UKZN BRU. A purebred Sprague-Dawley strain was used to ensure consistency and uniformity in the experimental cohort. All animals were kept under standard laboratory conditions, housed in pairs in cages measuring 29 cm by 22 cm by 14 cm; the ambient temperature was maintained at a constant level, a 12-hour dark photoperiod was implemented, and the rats had unrestricted access to food and water ad libitum. All animals in this study received humane care in accordance with the guidelines of the Institutional Animal Use and Care Committee of the School of Laboratory Medicine and Medical Science at the UKZN. The study adhered to ethical guidelines and obtained approval before the research commenced; the study protocol was assigned the reference number AREC/00003119/2021.
Animal housing and surgery To synchronize the date of delivery and eliminate experimental bias, the study mated the 16 healthy nulliparous females (NF) and 8 proven stud males (SM) to obtain 64 pups for the experiments. Once collected from the BRU, the NF rats were paired in groups of four per cage for one week to acclimatize, minimize stress and, above all, synchronize their oestrous cycles. The SM were individually housed for one week prior to mating to build up sperm count and maximize fertility. The mating of the females and males was staggered a few days apart. Vaginal smears were taken to assess the oestrous cycle, and when the females were at proestrus, two nulliparous females were transferred into a stud male's cage. On the following morning, a comprehensive examination was conducted to ascertain the presence or absence of an ejaculatory plug in the vaginal cavity, as outlined by Paccola et al. (2013). Upon discovery of a plug, the female rat was promptly confined in a cage with a controlled environment, labeled with a designated breeding date; she was tentatively assumed pregnant, with that date denoted as day one. The males were removed from the cages and euthanized. The study consisted of eight distinct groups, each comprising eight pups (n = 8). On the day of delivery, four pups were randomly selected from each dam and labeled accordingly for proper follow-up. The distribution of the groups and the specific stressors to which the animals were exposed is outlined in Table 1, and the experimental timeline is shown in Fig. 1. Dams that delivered more than the required number of pups had the excess pups euthanized after weaning on postnatal day 21. Bedding change and urine collection Bedding was cleaned and changed weekly; in addition, at three-week intervals from week 3 to week 15, the SD rats were transferred in their respective groups to metabolic cages in their experimental rooms for 30 minutes of urine collection while the bedding was being changed. Urine samples were immediately placed in clean, labeled containers for subsequent analysis.
Groupings and stressors Rats were randomly assigned to eight groups: control (non-stressed pups), group 1 (GS + ketamine-injected dams), group 2 (GS), group 4 (MS), group 5 (MS + ketamine-injected pups), group 6 (ketamine-injected parents), group 7 (ketamine parents + ketamine pups) and group 8 (ketamine positive control). The dams and their offspring were grouped per the conditions in Table 1. We utilized ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist, which mimics the cognitive, positive and negative symptoms of SZ [42,64]. Dams received ketamine (30 mg/kg, i.p.) daily for five consecutive days, while the control group received saline (0.5 ml/kg, i.p.) [10]; pups were injected with ketamine (16 mg/kg, subcutaneously) three times a week (on Mondays, Wednesdays and Fridays). The animals were weighed daily, and the doses were calculated per body weight for both dams and pups. This treatment protocol was initiated on postnatal day 1 and continued until postnatal day 14. The selection of dosage, route of administration and injection schedule was based on a comprehensive analysis of prior investigations reporting the induction of psychotic-like alterations after a 5-day regimen of ketamine treatment [43,49,78]. On postnatal day 21 the pups were weaned and the maternal subjects were euthanized. The animals were then left undisturbed until postnatal day 60, at which point they underwent behavioral testing (see Fig. 1). a) Gestational stress (maternal restraint stress) To simulate human prenatal stress, pregnant dams were subjected to daily restraint stress on gestational days 15-18. The protocol involved confining the pregnant dams in transparent cylindrical restrainers for 45 minutes in a well-illuminated environment maintained at 21-22 °C, with the restrainer dimensions adjusted to the rats' body mass over the approximately 21-23-day gestational period. b) Maternal and pup psychopathology (parental schizophrenia) Dams and pups were injected with ketamine to simulate schizophrenia-like symptoms (KSZ). Behavioral testing was conducted on the offspring at postnatal day 60, following a protocol in line with previous studies [43,49,78]. c) Maternal separation The maternal separation protocol involved daily 3-hour separations from 9 a.m. to 12 p.m., with dams and pups placed in separate areas to prevent interaction. After the separation, the pups were returned to the room and the dams went back to their cage. The light/dark cycle ran from 6 a.m. to 6 p.m., with weekly cage cleaning. Weaning occurred at postnatal day 21, and the rats were left undisturbed until postnatal day 48 (Begni et al., 2020). Fig. 2 visually depicts this separation process, and the groupings are summarized in Table 1.
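Because doses were recalculated from the daily weights, the injected amount scales directly with each animal's mass. As a purely illustrative calculation (the 250 g body mass is our assumption, not a value from the study):

\[ \text{dose} = 30\ \tfrac{\text{mg}}{\text{kg}} \times 0.25\ \text{kg} = 7.5\ \text{mg of ketamine, i.p., for a 250 g dam.} \]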
Behavioural tests a) Sucrose preference test Anhedonia, the inability to derive pleasure from enjoyable activities, is a fundamental symptom of depression observed in humans, and rodents can also present a hedonic-like response [54,80]. Rats were provided with two bottles in their home cages: one containing 5% sucrose solution and the other containing regular water. This setup was initiated on gestational day 22 for prenatal stress and on postnatal days 56-59 to test the effects of ELS and KSZ, with a duration of 48 hours [50]. On the third day, a preference test was conducted by replacing one bottle with plain water while the other bottle still contained the sucrose solution. Before the test, the rats were weighed, food was provided, and they were left undisturbed in their cages for 24 hours; they were weighed again before the test commenced. After a 24-hour testing period, the bottles were removed and weighed to determine the sucrose intake. The preference test was conducted over a period of three days [50]. The preference for sucrose was calculated using the formula: Preference = (sucrose intake/total intake) × 100. This protocol was adapted from [54] and [22].
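Worked through with hypothetical intakes (our numbers, for illustration only), a rat drinking 12 g of sucrose solution and 4 g of water over the test period would score

\[ \text{Preference} = \frac{12}{12 + 4} \times 100 = 75\%, \]

well above the 50% chance level, whereas an anhedonic animal drifts toward 50%.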
b) Elevated plus maze In the current study we employed the EPM to assess anxiety-like behavior in SD rats. The maze consisted of a plus-shaped platform elevated above the ground, with two open arms and two enclosed arms [91]. Rats naturally exhibit an aversion to open and elevated spaces, making the EPM an effective tool for measuring anxiety-related responses. Each rat was placed at the center of the maze facing one of the enclosed arms and allowed to explore freely for 5 minutes, and its behavior during the test was recorded and analyzed. Sacrifice and neurochemical analysis a) Animal sacrifice and tissue collection Prior to the procedure, animals were anesthetized using a combination of ketamine and xylazine, or halothane, inducing deep unconsciousness to minimize any potential distress or pain. Once the desired level of anesthesia was confirmed, transcardial perfusion or decapitation was performed, as described below. Sacrifice was carried out by trained personnel experienced in the procedure, with a focus on precision and compassion, and detailed records of each event, including animal identification, date, time and personnel involved, were documented for regulatory compliance and transparency. The guillotine had been sharpened no more than 3 weeks before the decapitation date. Throughout the process the welfare of the animals was paramount and every effort was made to minimize distress, underscoring our commitment to the highest standards of ethical integrity and animal welfare. b) Transcardial perfusion fixation Half of the SD rats underwent transcardial perfusion fixation (TPF): they were anesthetized with ketamine and xylazine, transferred to the operating station and perfused with phosphate-buffered saline (PBS) and 4% paraformaldehyde (PFA). To perform the perfusion, a small incision was made at the posterior end of the left ventricle and a perfusion needle was inserted into the ascending aorta through the incision. The sternum was clamped and lifted away, together with any tissue connecting it to the heart, providing a clear view of the major vessels as the thymus was lifted away with the sternum. Additional anesthesia was administered as needed throughout the procedure, which followed the guidelines of Gage et al. [29]. Transcardial perfusion is important because it preserves brain tissue in a stable and reproducible state, facilitating detailed anatomical, molecular and cellular analyses, and it ensures that the tissue is free from blood, which could interfere with imaging and staining [29]. c) Corticosterone and dopamine level quantification Corticosterone and dopamine play crucial roles in mediating the physiological and behavioral responses to stress in animal models, and research has consistently linked alterations in these neurochemicals to behavioral outcomes associated with stress-related disorders, including anxiety- and depression-like behaviors [39,88,93]. In animal models of early life stress, measuring corticosterone and dopamine alongside behavioral assessments provides a comprehensive picture of the neurobiological mechanisms underlying stress-induced behavioral phenotypes. Elevated corticosterone, reflecting hyperactivity of the hypothalamic-pituitary-adrenal (HPA) axis, has been linked to increased anxiety-like behaviors and depressive symptoms [60]; conversely, alterations in dopamine signaling, particularly within mesolimbic and mesocortical pathways, are associated with changes in reward processing, motivation and mood regulation, all of which contribute to stress-related behavioral dysfunction [8,89]. Corticosterone and dopamine concentrations were measured with ELISA kits, specifically the corticosterone kit (E-EL-R0269) and the dopamine kit (E-EL-R0343), in both blood and urine samples. The kits provide a quantitative measurement of hormone concentration based on specific antibody-antigen interactions, allowing us to examine hormone levels under the experimental conditions. After the behavioral protocol, animals were anesthetized with a ketamine and xylazine overdose and perfused with phosphate buffer and 4% paraformaldehyde fixative (Oginga and Mpofana, 2023). Brain sections containing the orbitofrontal cortex and dorsal hippocampus were obtained with a vibratome and stored for analysis. ii. Cresyl violet staining and volume estimation Cresyl violet staining was used to visualize and demarcate the boundaries of the hippocampus in the examined sections. An anatomical microscope at 1.25× magnification was used to determine the number of grid points within the hippocampus, as depicted in Fig. 2. The hippocampus was divided into CA1, CA2/3 (CA2 and CA3) and DG for easy demarcation.
The total volume of the hippocampus was estimated using Cavalieri's principle, as described by [34,90], by which the volume (V) of the hippocampus or its subregions is determined. The thickness (t) of the tissue block used for analysis was 0.6 mm, each grid point was associated with an area a(p) of 0.09 mm², and the total number of grid points (ΣP) falling on the hippocampus was counted for each rat.
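Written out, the Cavalieri estimator simply multiplies these three quantities; the grid count below is a hypothetical value of ours, used only to show the arithmetic:

\[ V = t \cdot a(p) \cdot \Sigma P = 0.6\ \text{mm} \times 0.09\ \text{mm}^2 \times 600 = 32.4\ \text{mm}^3. \]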
iii. Immunohistochemical labeling The tissue slices were subjected to a sequential washing protocol of three consecutive 10-minute washes in 0.1 M PB supplemented with 0.5% Triton X-100 (TX), followed by a 3-hour blocking step in PB containing 2% normal goat serum (NGS), 3% bovine serum albumin and 0.3% TX. The tissue was then incubated for 48 hours at 4 °C with a mouse monoclonal antibody (MAB360, Millipore) targeting GFAP, diluted 1:10,000 in a working buffer of PB containing 2% NGS and 0.3% TX. Following incubation, the sections were washed three times for 10 minutes in the working buffer and transferred to working buffer at 25 °C containing a 1:500 dilution of goat anti-mouse IgG conjugated to Alexa Fluor 633 (Inqaba Biotechnical SA). After a 3-hour incubation, the sections were incubated for a further 15 minutes at 25 °C in working buffer containing 4′,6-diamidino-2-phenylindole (Sigma) diluted 1:2000, washed, and carefully mounted in ProLong Gold antifade reagent. Astrocytes and principal neurons were identified with a confocal microscope on the basis of the distinct size and shape of their somata. Immunofluorescence labeling was used to detect the astroglial marker S100b: free-floating sections were incubated overnight with a rabbit anti-S100b antibody (Switzerland) at a dilution of 1:400, then incubated for 1 hour with 20 mg/ml carbocyanine (Cy) 2-conjugated donkey anti-rabbit IgG (Dianova). To block the unoccupied binding sites of the Cy2 anti-rabbit IgG, the sections were incubated for 3 hours with 50% rabbit antiserum, then incubated overnight with 10 mg/ml rabbit anti-GFAP antibodies, and finally incubated for 1 hour with Cy3 anti-digoxin (20 mg/ml; Dianova) to detect GFAP immunoreactivity. After stereological analysis, further quantification of GFAP+ astrocytes was performed with ImageJ software. High-resolution confocal images of the tissue sections were captured, and regions of interest containing GFAP-labeled cells were selected for analysis. In ImageJ, individual astrocytes were identified and counted on the basis of their fluorescence signal intensity and morphology, and the number of GFAP+ cells in each region of interest was quantified with the software's automated cell-counting algorithms, ensuring accurate and reproducible measurements [68]. iv. Astrocyte density and process counting High-quality digital images of the stained tissue sections were acquired with a camera-equipped microscope and imported into ImageJ, an open-source image-analysis package. Astrocytes staining positive for the GFAP marker were identified and counted manually or with automated image-analysis algorithms. High-resolution images capturing astrocyte processes within the hippocampus were processed to enhance contrast and highlight these intricate structures. Thresholding was applied to distinguish astrocyte processes from the background, and subsequent skeletonization reduced them to one-pixel width for improved visibility. The "Analyze Skeleton" tool in ImageJ was then used for an intersection analysis, providing quantitative data on the number of intersections within the astrocyte processes; parameters such as "Prune Cycle" were adjusted for optimal identification. The results were validated manually to ensure accuracy and documented, including the average number of intersections per astrocyte or per specified area. This rigorous approach aimed to quantify astrocyte process intersections and so contribute to a comprehensive understanding of their distribution and morphology within the studied hippocampal region. Data analysis and statistical measures All statistical analyses were performed in the Paleontological Statistics software package (PAST 4) with the significance level set at p < 0.05; results are expressed as mean ± SEM. To control the familywise error rate across multiple comparisons, the adjusted significance level was set at p < 0.0018. Group comparisons on the behavioural tests and astrocyte measures were made by one-way ANOVA, and power analysis was conducted to assess effect sizes. Where the ANOVA indicated a significant overall effect, a post hoc Tukey's test was performed to compare the means of the different groups; t-tests were used to compare the means of two independent groups or the mean difference between two related groups. ImageJ was used to quantify hippocampal astrocyte density (HAD) and process intersections. The data on cognition, memory and motor function were recorded and plotted as bar graphs, box plots and radar plots.
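For reference, the one-way ANOVA statistic and the omega-squared effect size reported in the results take their standard forms,

\[ F = \frac{MS_{\text{between}}}{MS_{\text{within}}}, \qquad \omega^2 = \frac{SS_{\text{between}} - (k-1)\,MS_{\text{within}}}{SS_{\text{total}} + MS_{\text{within}}}, \]

where k is the number of groups; an ω² near 0.7, as reported below for astrocyte density, indicates that most of the variance is attributable to group membership.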
Results Early life stress amplifies schizophrenia's grip on anhedonia in SD rats Anhedonia, a symptom associated with depression, can occur at various stages of a depressive episode [2]; it is characterized by a diminished capacity to experience pleasure or a loss of interest in previously enjoyable activities [87]. To test for anhedonia, we measured the percentage sucrose preference. The analysis of variance between the control and the treatment groups (groups 1, 2, 4, 5, 6, 7 and 8) revealed a significant difference on the SPT (p < 0.001), indicating an increase in anhedonia. Additionally, a Student's t-test comparison between group 8 (normal dams with KSZ pups) and group 7 (KSZ dams and pups) revealed a significant difference (p = 0.01) on the SPT; see Fig. 3. Early life stress has a lasting effect on anxiety-like responses later in life (elevated plus maze, EPM) a) Number of entries into the open and closed arms In the present study we used the EPM to measure anxiety-like behavior in SD rats; the EPM exploits the conflict between a rodent's innate fear of open spaces and its motivation to explore novel environments [44,91]. The statistical analysis revealed no significant difference between the control and groups 1, 2, 4, 5, 6, 7 and 8 (p > 0.05) in the number of entries into the closed arm. However, group 6 (ketamine-injected parents) showed a significantly higher number of closed-arm entries than group 4 (maternally separated pups) (p = 0.009) and group 5 (maternal separation + ketamine-injected pups) (p = 0.03); see Fig. 4(A). b) Time spent in the closed and open arms To obtain a comprehensive assessment of the anxiety-like behavior resulting from ELS, we further analyzed the time spent in both arms. Starting with the closed arm, the groups exposed to ELS and SZ differed significantly from the control (p = 0.05), with group 5 (maternal separation + ketamine-injected pups) spending the most time in the closed arm. The post hoc analysis further revealed that groups 1, 4, 5, 6, 7 and 8 differed significantly from the control group; see Fig. 5(A). Early life stress and parental psychopathology induced prefrontal astrocyte volume loss We examined prefrontal astrocyte density as a potential correlate of the observed behavioral changes. Astrocytes, a type of glial cell, play a crucial role in the functioning of the prefrontal cortex (PFC) and have been implicated in anxiety and depression [51]; their processes and intersections are essential for maintaining the structural and functional integrity of this brain region (see Fig. 6(a, b and c)). The analysis of variance (ANOVA) revealed significant differences between the control group and each of the experimental groups (MS, GS and KSZ) (p < 0.001), with a large effect size (ω² = 0.707). In addition, a paired Student's t-test showed a significant difference between group 7 (ketamine parents + ketamine pups) and group 8 (positive control) (p = 0.011); see Fig. 6(D). Early life stress effect on plasma dopamine level The specific effects of early life stress on plasma dopamine levels can vary with the nature, duration and severity of the stressor, as well as the developmental stage at which the stress is experienced. The pairwise comparisons yielded statistically significant differences among several experimental groups (p < 0.05): control vs group 1 (p < 0.001), control vs group 2 (p = 0.057), control vs group 5 (p < 0.001), control vs group 7 (p < 0.001) and control vs group 8 (p < 0.001). Significant distinctions were also evident between group 1 (prenatal maternal stress + ketamine-injected pups) and group 2 (prenatal maternal stress) (p = 0.003), as well as between group 4 and group 7 (p = 0.007). Notably, no significant difference was observed between group 4 and group 8 (p = 0.191); see Fig. 8.
Discussion

This study demonstrates that differential ELS and KSZ in dams caused increased anxiety-like behavior in SD rats. These stressors also induced astrocyte volume reduction in the PFC of SD rats. As for the physiological and neurochemical changes that occur in response to ELS and KSZ in rats, we quantified urine corticosterone from weeks 3 to 15 at three-week intervals, as well as plasma dopamine. For some time now, MS, GS and social isolation in rats have been employed to mimic early life experiences in humans such as childhood maltreatment or trauma. Several previous studies have reported that MS, GS, and social isolation during early life in rats caused reduced social interaction and anhedonic-like behavior [48,65]. Nevertheless, there remains a dearth of comprehensive exploration into the distinct impacts of MS, GS, and KSZ on social behavior, anhedonia, and anxiety-like responses in rat models. Even so, to our knowledge no study has examined the use of KSZ, as reported here, to mimic parental psychopathology in humans. Here this study showed that differential stressors such as MS, GS and KSZ increased anxiety-like (anhedonia) behaviors in ELS SD rats (Figs. 4, 5 and 6).

To be specific, there was a significant difference in means among the groups, indicating that ELS amplified SZ's grip on anhedonia in SD rats, with a large effect size emphasizing the substantial impact of ELS and KSZ on anhedonia. The present study found a significant increase in anhedonia, as depicted by the decreased sucrose preference in the rats exposed to stress (MS, GS, ELS and PSZ) compared to the Control (p = 0.001). MS and GS have previously been shown to underscore the amplifying impact of ELS on schizophrenia-related anhedonia in later stages of life, as demonstrated in the study by Cui and colleagues [17]. Furthermore, these methods have been effectively employed in creating depression models, with rodents exhibiting heightened anxiety-like behaviors as a result (Du Preez et al. [21]). Interestingly, in the present study we also observed a significant decrease in anhedonia, as depicted by the increased sucrose preference in the pups of Group 8 (consisting of normal parents and SZ pups) compared to Group 7 (consisting of schizophrenia parents and pups). This finding suggests that normal parenting can potentially reduce the magnitude of depressive-like outcomes in later life. These results are particularly noteworthy as they contribute novel data in the context of experimental animal studies, as previous researchers have not modelled the PSZ paradigm and therefore have not reported such findings. Nevertheless, several human studies have reported a positive correlation between positive parenting and children's externalizing and internalizing behaviours (Ying [13]), in contrast to what is observed in PSZ [37]. Moreover, a study by Dong and colleagues examined the impact of positive parenting on children's emotional intelligence and found that higher levels of positive parenting were associated with greater emotional awareness and regulation skills in children [20]. In addition, Tarver and colleagues' autism meta-analysis reported that negative parenting behaviors correlated positively with externalizing behaviors, while positive parenting behaviors correlated positively with social skills and weakly with internalizing behaviors [85].
The EPM is the gold standard for measuring anxiety outcomes in rodent models because of its good face validity and its test-retest reliability [26]. As mentioned earlier, previous studies have shown that ELS such as MS, GS and social isolation increases anxiety later in life [36].

Here this study showed that differential ELS or KSZ have the propensity to increase anxiety later in life, as reflected in the number of entries into and the time spent on both the closed and open arms. For instance, on the number of entries into the closed arm, the results demonstrated significant variation in anxiety levels among the different groups. On the open arm, however, the control group and the prenatal stress group spent significantly more time than the other groups, with a substantial effect size. Further analysis indicated significant differences in means between the groups, particularly for the time spent on the closed arm.

Several studies have reported similar effects: [6] found that brief MS was associated with impaired social behaviour and anxiety in mice. Furthermore, maternal separation had deleterious effects on rodents' behavior and demonstrated significant sex-specific effects on social behavior [41]. These consistent results across studies indicate the detrimental impact of MS on anxiety-related behaviors. In addition, a human study by Elysayed and colleagues [23] investigated the role of familial risk, parental psychopathology, and stress in the first onset of depression during adolescence. The findings indicated that adolescents with a high familial risk for depression were eight times more likely to develop first-onset depression compared to those with low familial risk. Their results showed that maternal behavioral disorders and increased recent life stress directly predicted the initial onset of Major Depressive Disorder (MDD) in high-risk adolescents.

Conjointly, sociality and anxiety are intricately linked and regulated by neural circuits centered around the PFC and hippocampus [12,40]. These two areas of the brain have been associated with the control of social behaviors and anxiety [12,14]. Furthermore, these brain regions are recognized as being negatively affected by trauma, neglect and child maltreatment [7]. Several prior studies have reported reduced gray matter volume and alterations in neural connectivity in the PFC and hippocampus of children exposed to ELS and PSZ [33].

We found a significant decrease in PFC astrocyte density in stressed rats compared to the control, non-stressed rats. This result suggests the possibility that ELS and PSZ may cause inflammation and result in neuronal cell death in the brain. To support this idea, previous studies have reported neuronal cell death due to chronic stress [52,58]. Moreover, altered astrocyte density in the PFC has been implicated in the pathophysiology of various psychiatric disorders, including SZ [19,3,73]. In addition, the regenerative potential of astrocyte gliosis could explain the difference in reduced anhedonia and anxiety observed between group 8 (normal parents and schizophrenia pups) and group 7 (schizophrenia parents and pups), as supported by several prior studies [16,30].

In addition to the neuronal underpinnings, we quantified urine corticosterone levels at three-week intervals from weeks 3 to 15 (see Fig. 7). The results demonstrated a significant overall difference between the means of the groups, emphasizing the influence of ELS and PSZ on corticosterone levels (see Figs.
7 and 8), with a high effect size. These findings underscored the substantial impact of ELS and PSZ on the regulation of corticosterone release. All treatment groups showed significant differences when compared to the control group, indicating the disruptive effect of early life stress and parental psychopathology on the stress response system. Also, we observed a significant difference between Group 7 (SZ parents and pups) and Group 8 (normal parents and schizophrenic pups). This finding suggests that good parenting practices had a reversing effect on the stress response system. Chronic stress can lead to dysregulation of the HPA axis, resulting in increased secretion of corticosterone, a stress hormone. Prolonged activation of the HPA axis can contribute to the various physiological and psychological effects associated with chronic stress, as seen in Fig. 8. Further stress, however, can lead to still higher corticosterone levels, affecting PFC astrocyte density in the brain.

Dopamine is often associated with the brain's reward system, contributing to feelings of pleasure and reinforcement. It plays a role in motivation and the anticipation of rewards. Several studies have suggested that stress induces alterations in dopamine signaling that may potentially contribute to the behavioral changes associated with chronic stress. In the current study, we observed that animals exposed to differential ELS and KSZ exhibited a significant reduction in plasma dopamine levels compared to the control group and the other groups, except for the GS group (p < 0.05). However, there was only a slight, non-significant difference between group 8 and group 7. Generally, this signified that the above stressors play a potential role in the increased anhedonia and anxiety reported earlier by the SPT and EPM. These findings are in line with prior studies [53,83] reporting that ELS and PSZ cause anxiety through changes in mesolimbic dopamine reward functions, hyper-responsiveness of the HPA axis stress response, and other stress- and reward-related pathways that affect dopamine levels. In summary, depression and anxiety are usually the first key symptoms in the development of mental illness. In the context of this study, it is important to acknowledge the limitations associated with using a rat model to explore the relationship between KSZ and later-life anxiety and depression in rodents. While this approach offers valuable insights, several considerations warrant attention. To begin with, the rat model might simplify the complexity of human SZ, potentially limiting its relevance to understanding anxiety and depression. Secondly, while rats do exhibit behaviors associated with anxiety and depression, these might not precisely mirror human emotional experiences.
In conclusion, the results of this study demonstrate that ELS and KSZ impact adolescent social behaviors, anxiety-like behaviors, and PFC characteristics in SD rats. This study suggests that ELS causes alterations in the astrocytes of the PFC, leading to social impairments and increased anxiety-like behaviors later in life. Similarly, the interplay of these factors (ELS and PSZ) underscores the multifaceted nature of the disorder experienced in humans. Interestingly, positive parenting has the potential benefit of reducing anxiety and depression later in life. Additional research employing molecular and cellular techniques, such as gene expression analysis or neurotransmitter receptor profiling of the PFC, hippocampus and basal ganglia, may provide valuable insights into the underlying neurobiological alterations.

Role of the financial funder

The funder will support the collection of data by the original investigators, data management, and data analysis. The funder is not involved in the design of the projects, the protocol and the analysis plan. The funder will have no input on the interpretation or publication of the study results.

CRediT authorship contribution statement

Thabisile Mpofana: Writing - review & editing, Writing - original draft, Visualization, Supervision, Methodology, Investigation, Funding acquisition, Conceptualization. Fredrick Otieno Oginga: Writing - review & editing, Writing - original draft, Visualization, Methodology, Investigation, Conceptualization, literature searches. FOO and Thabisile Mpofana (TM) collaborated on the initial draft of the paper, providing substantial input in the form of text passages and revisions for significant intellectual content. All authors have participated in, reviewed, and granted approval of the final version of the manuscript.

Fig. 2. Illustration depicting the stereological method employed for hippocampal volume estimation. Grid points were counted across the entire hippocampus. Scale bar = 1 mm.

During the test, we measured parameters such as the time spent in the open arms, the number of entries into the open arms, and the latency to enter the open arms as indicators of anxiety-like behavior. Anxiety-like behaviour was typically characterized by a reduced exploration of the open arms and an increased preference for the closed arms. Conversely, decreased anxiety was reflected in an increased level of exploration of the open arms. The EPM has been extensively validated as a reliable and sensitive test for assessing anxiety-like behavior in rodents in previous studies [91,92].

Fig. 3. Percentage Sucrose Preference Test in Different Treatment Groups and Control. The figure presents the results of the percentage sucrose preference test in seven treatment groups compared to the control group. The sucrose preference test was conducted to assess the hedonic-like response and reward sensitivity in the different experimental conditions. Statistical significance was determined using one-way ANOVA with Bonferroni correction (*: p < 0.05; **: p < 0.01).
Fig. 4. Number of Entries into the Open and Closed Arms on the Elevated Plus Maze. The results of the Welch F test indicated a significant difference (p = 0.039) among the groups, suggesting variations in the measured parameter. Statistical significance was determined using one-way ANOVA with Bonferroni's correction (*: p < 0.05; **: p < 0.01).

Fig. 5. Box plots illustrating the time spent on the closed arm (Graph A) and the time spent on the open arm (Graph B) in the elevated plus maze. Graph A: the boxes indicate the interquartile range (IQR), with the horizontal line inside representing the median. The whiskers extend to the minimum and maximum values, excluding outliers, which are depicted as individual data points beyond the whiskers. Graph B: the box plot depicts the distribution of time spent on the open arm across the eight experimental groups. Statistical significance was determined using one-way ANOVA with Bonferroni's correction (*: p < 0.05; **: p < 0.01).

Fig. 6. (A-B) Three-dimensional images of labeled astrocytes. (C) A 2D image of astrocytes labeled with anti-GFAP. (D) Violin box plot depicting the distribution of the measured variable among the eight experimental groups. The violin box plot displays the variation in the measured variable across the control group and the seven other experimental groups. Each group is represented by a violin-shaped distribution, where the width represents the density of data points at different values. The white dot within each violin represents the median, and the thick horizontal line denotes the interquartile range (IQR). The thin lines extending from the violins indicate the range of the data, excluding outliers. Statistical significance was determined using one-way ANOVA with Bonferroni correction (*: p < 0.05; **: p < 0.01).

Fig. 7. (A) The line graph depicts the changes in corticosterone levels measured at different time points (weeks 3, 6, 9, 12 and 15) in the experimental groups. Each data point represents the mean corticosterone level, and the error bars indicate the standard deviation. The corticosterone levels showed fluctuations over time, with an initial increase at week 6, followed by a gradual decrease until week 12. At week 15, the corticosterone levels reached their lowest point. The graph illustrates the dynamic nature of corticosterone levels and provides insights into the temporal patterns of corticosterone regulation. (B) Box plot indicating corticosterone levels. Statistical significance was determined using one-way ANOVA with Bonferroni's correction (*: p < 0.05; **: p < 0.01).

Fig. 8. Plasma Dopamine Concentration Among Eight Experimental Groups Exposed to ELS and Parental Psychopathology. The figure illustrates the plasma dopamine concentration, measured in picograms per milliliter (pg/ml), among eight distinct experimental groups. Statistical significance was determined using one-way ANOVA followed by Bonferroni correction (*p < 0.05, **p < 0.01) to assess group differences in plasma dopamine levels. Error bars represent standard error of the mean (SEM) values.

Funding sources: 1. College of Health Science scholarship at the University of KwaZulu-Natal. 2. Department of Human Physiology, Faculty of Health Sciences, North West University, South Africa; Developing Research, Innovation, Localization and Leadership in South Africa (DRILL) Fund.
Study of iterations in the design process of a product for automotive industry

This paper presents an experiment conducted in order to observe the iterations in a design process. The main objective of this work is to study the iterations during the design process using a laboratory experiment, in order to understand how and why iterations occur. The different forms of iterations as they occur in practice are identified. This study will help us in the classification of iterations in order to distinguish useful iterations from undesirable ones. The results of the study might be used to improve the manner of working in the field of engineering design.

Introduction

In this paper, a laboratory experiment was used to make observations about the iterations in a design process. Iteration is a fact of life in any project: the larger, more novel and more interconnected a project, the more of an issue it can be [1,2]. Iteration can be defined as the repetition of design tasks due to the arrival or discovery of new information [3]. For industry, engineering design is a source of competitive advantage, and one of the key factors for the success of a corporation is reducing the duration of the product development process. For product developers, it is very important to understand iterations so that they can be managed throughout the design process. However, the design process is a complex and dynamic one. In order to understand the design process, experts have used a large number of methods and models. Most models are complementary to each other; the approaches used vary depending on the context, on the vision of the experts and on the scope. Unfortunately, many of the models are developed based on intuition and practice. The experiment presented in this paper aims at observing and understanding the production of iterations, the effects of these iterations, how they can be anticipated, etc.

This paper is organised as follows: first, a literature review of design process models integrating the iteration aspect of engineering design is presented. After this literature review, the research method used in this study and the experiment are presented. The next section presents the experimental data and observations, and their analysis. The conclusion of the paper presents a summary of our work and future developments.

Literature review

A way of understanding and acting on the design process is through its modelling. The work of developing a model provides a better understanding of the functional and behavioural characteristics of the design process. The use of the model makes it possible to define, test and improve strategies for the acquisition and engagement of design activities. The literature offers us a wide range of models. These models can be classified into two main families: prescriptive models and descriptive models [4]. Most of the models encountered that can be used to study iterations are prescriptive models.

Smith and Eppinger [3] propose a model for the description of sequential processes with the presence of iterations. The model is used to find an initial ordering of activities leading to a minimal duration of the design process. Its main limitation lies in the assumptions adopted concerning the durations of the activities and the rework probabilities, which are considered constant and known in advance. The model developed by Krishnan [5] is used to manage the overlapping of two design activities, where the downstream activity starts with partial information from the uncompleted upstream activity.
This can lead to poor, premature design choices, causing loss of quality or long and costly iterations. The authors identified two characteristics of design process activities, called sensitivity and evolution, which allow the optimum degree of overlap to be chosen by determining which information can be frozen early in the design process and which information is to be used in a preliminary form. The limitation of this model is that it considers the problem of overlapping only two activities. Moreover, it does not take into account the feedback from the downstream activity to the upstream activity, and only the downstream activity is reiterated in the process. Yassine [6] also developed a mathematical model of two design activities that can be sequential, parallel, or partially overlapping. The duration of the iterations and their number are represented by random variables with a known distribution. The model calculates the total duration of the process.

Numerous research papers have approached, more or less directly, the problem of iterations in the design process. Indeed, iterations contribute greatly to the extension of development times. Osborne [7] reports that iterations account for between 13% and 70% of the total development time for Intel's semiconductor design activities. He also reports that the variation in development time is mainly due to iterations. In their work, Pahl and Beitz describe the design process as a succession of phases [8]. They define iterations as the process by which a solution is approached step by step. Iterations take place between the different phases and often within each phase; in this case, they make it possible to refine a design solution. This procedure is analogous to the iterative methods used in mathematics for solving an equation or a system of equations: starting from an initial solution, a more precise solution is computed, which is in turn refined until convergence towards an acceptable solution. This type of iteration is often encountered in interdependent design activities. For example, in order to carry out activity C, it is necessary to know the value of the parameter y provided by activity B. To calculate this parameter, activity B in turn needs the value of the parameter x supplied by activity A, but this needs the result of activity C, represented by the parameter z. In this case, an iterative process is required to compute all the parameters (a minimal code sketch of such a fixed-point computation is given below).

As a brief conclusion, in the literature, iteration:
- is needed to solve complex problems [9];
- may help to deal with a changing context [9];
- may be reduced by a focus on central information-consuming/generating nodes [2];
- is strongly influenced by small changes in task time if close to capacity [10].

Design Experiment

The activity of design is a social one. In order to understand all the aspects of the design process, it is useful to run experiments in order to observe how designers interact and how the process progresses. Design can be accomplished in different situations: by an individual designer, by a design team or by several teams. The members of a design team can work synchronously or asynchronously; they can also be geographically distributed [11]. The main objective of observations is to understand cognition, creativity and innovation in the design process. The role of our study is to observe iterations in a design activity performed by a multidisciplinary team.
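To make the interdependent-activity example from the literature review concrete, the sketch below (in Python) resolves the coupled parameters x, y and z by fixed-point iteration. The linear update functions, initial guess and tolerance are hypothetical placeholders for illustration, not data from the experiment.

def activity_A(z):
    # Activity A computes x from the result z of activity C (hypothetical relation).
    return 0.5 * z + 1.0

def activity_B(x):
    # Activity B computes y from the parameter x supplied by activity A.
    return 0.8 * x + 0.2

def activity_C(y):
    # Activity C computes z from the parameter y provided by activity B.
    return 0.6 * y + 0.3

# Start from an initial guess and iterate until the parameters stabilise.
z, tol = 0.0, 1e-6
for iteration in range(100):
    x = activity_A(z)
    y = activity_B(x)
    z_new = activity_C(y)
    if abs(z_new - z) < tol:  # convergence towards an acceptable solution
        break
    z = z_new
print(f"converged after {iteration + 1} iterations: x={x:.4f}, y={y:.4f}, z={z_new:.4f}")

Because the hypothetical update functions form a contraction, the loop converges quickly; in a real design process, each call would stand for a (possibly lengthy) design task, which is why such cycles extend development time.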
Observational techniques were used to record the design process, and the recordings were used to carry out different studies of the activity of the participants. The experiment described in this paper was developed at the University of Pitesti. The theme of the experiment was to design a roof lighting system for an SUV intended for intervention in disaster situations or for the border service. The system must be able to adapt to all atmospheric conditions: clear sky, snow, rain, fog, smog and pollution. The experimental subjects were four PhD students, three from the Department of Manufacturing and Industrial Management and one from the Department of Electronics, Computers, Communications and Electrical Engineering. Each participant was given a distinct role: a mechanical designer (Catia V5), an electrical engineer and programmer (Arduino), a risk analyst, and a mechanical specialist. The design team was coordinated by a leader. The experiment lasted for two weeks and comprised four design sessions. All the sessions were recorded in order to fully capture the design activity. In addition to these sessions, the designers performed their own tasks individually and asynchronously, but they could exchange information with the others.

Observations and results

This paper presents the third working session, which corresponds to the maturity of the product ("embodiment design"). In this session, a great number of iterations were identified. The design tasks performed by the designers were established, and then all the iterations were identified based on the links between the tasks, using the DSM method (figure 3). Design Structure Matrix (DSM) based models [3,6] have been used to represent the iterative structure of engineering design. DSM uses a matrix representation of the design process. The DSM matrix is square, with one task per column and per row. Information flows between tasks are indicated in the off-diagonal elements of the matrix. Two types of information flows are distinguished: feed-forward flows (lower-diagonal elements) and iterations (upper-diagonal elements). With this representation, cyclic information flows are easily captured and the need for iterations is identified (a minimal sketch of this representation is given after the Conclusions).

The sources of the iterations

After the analysis, we identified three main sources of iterations. The change of objectives: during the design process, some initial data or solutions already proposed can be changed at any time for different reasons, leading to the repetition of a number of design tasks. Interdependence between the tasks: these are mutually dependent tasks, and in this case several iterations are necessary to arrive at an acceptable solution; the total delay of the design process then often depends on the initial scheduling of the interrelated design tasks. Design errors: these errors become all the more important as the design process grows more complex and involves more and more people, especially in the context of simultaneous engineering, where several different trades work together.

The typology of iterations

The iterations can be classified according to a multi-dimensional typology. We propose a typology with four classification criteria. Voluntary/involuntary iterations: voluntary iterations are due to interdependent activities and sometimes to changes in objectives, while involuntary iterations are mainly due to design errors. This classification makes it possible to better predict the actions to be taken to reduce the development time of the product.
The reduction of iterative processes can be achieved by better structuring the activities of the design process: decoupling interdependent activities, better scheduling of design activities, reduction of coordination time between design teams, and so on. However, the reduction of iterative processes can increase the risk of failure of the solutions developed and must therefore be considered with caution. Short/long iterations: an iteration can be considered short or long. This classification makes it possible to evaluate the impact of iterations on the time and/or the overall cost of the activities involved. This evaluation requires a form of quantification linked to the attributes of the activities (duration, cost) or to the number of iterated activities. In this case, we must study the trade-off between several short iterations and a limited number of long iterations. Positive/negative iterations: some iterations produce value, but not all of them. An iteration that can be eliminated without loss of value is considered a negative iteration. This classification makes it possible to specify how to handle the iterations (elimination, better control, reduction, etc.). Fast/slow iterations: this criterion refers to the speed at which a design task, or part of a task, is repeated in order to bring it into conformity with the others or to correct errors. In figure 4 the iterations are classified according to their sources.

Conclusions

Experiments provide the data needed to understand the design process. In this study, we presented an analysis of the iterations in a design experiment. Regarding changes of objectives, voluntary iterations are more numerous because they are needed to explore the space of solutions in order to meet the new requirements of the design. Thus, a large number of short, positive iterations is engaged to improve the chosen solution. The short character of these iterations allows the iterative process to be completed quickly. It is very interesting that in the case of interdependence between the tasks, involuntary iterations do not occur, which seems natural in theory. All these iterations are entered into voluntarily, aiming to bring us closer to the final stage of the product. Design errors, in contrast, generate long iterations, affecting the duration of the design process. Unfortunately, these sources of iterations cannot be predicted, as their appearance is random. This type of analysis of a design experiment helps the project manager to build and distribute design teams and to optimize the design of a product in the new context of globalisation.
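As a closing illustration of the DSM representation referred to above, the following sketch encodes a small, hypothetical design process as a binary matrix and separates feed-forward links (below the diagonal) from iteration-causing feedbacks (above the diagonal). The task names and dependency pattern are invented for illustration and do not reproduce the matrix from figure 3.

import numpy as np

# Rows and columns follow the task order; entry [i, j] = 1 means task i
# receives information from task j (hypothetical five-task process).
tasks = ["spec", "mechanical", "electrical", "risk", "integration"]
dsm = np.array([
    [0, 0, 0, 1, 0],  # spec is revisited after risk analysis -> feedback
    [1, 0, 1, 0, 0],  # mechanical uses spec and electrical output -> feedback
    [1, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0],
])

# Feed-forward flows lie below the diagonal; iterations lie above it.
for i in range(len(tasks)):
    for j in range(len(tasks)):
        if dsm[i, j]:
            kind = "feed-forward" if j < i else "iteration"
            print(f"{tasks[j]:>11} -> {tasks[i]:<11} ({kind})")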
What Makes Deeply Encoded Items Memorable? Insights into the Levels of Processing Framework from Neuroimaging and Neuromodulation

When we form new memories, their mnestic fate largely depends upon the cognitive operations set in train during encoding. A typical observation in experimental as well as everyday life settings is that if we learn an item using semantic or "deep" operations, such as attending to its meaning, memory will be better than if we learn the same item using more "shallow" operations, such as attending to its structural features. In the psychological literature, this phenomenon has been conceptualized within the "levels of processing" framework and has been consistently replicated since its original proposal by Craik and Lockhart in 1972. However, the exact mechanisms underlying the memory advantage for deeply encoded items are not yet entirely understood. A cognitive neuroscience perspective can add to this field by clarifying the nature of the processes involved in effective deep and shallow encoding and how they are instantiated in the brain, but so far there has been little work to systematically integrate findings from the literature. This work aims to fill this gap by reviewing, first, some of the key neuroimaging findings on the neural correlates of deep and shallow episodic encoding and, second, emerging evidence from studies using neuromodulatory approaches such as psychopharmacology and non-invasive brain stimulation. Taken together, these studies help further our understanding of levels of processing. In addition, by showing that deep encoding can be modulated by acting upon specific brain regions or systems, the reviewed studies pave the way for selective enhancements of episodic encoding processes.

INTRODUCTION

Whether we remember an event or not depends on a set of mental processes and brain mechanisms that occur during the initial encoding of the event, its subsequent retrieval, and consolidation processes that take place between encoding and retrieval. Among the factors that act upon encoding, the level to which an item is cognitively processed largely affects memorability. This levels of processing (LOP) framework was originally proposed by Craik and Lockhart in 1972 (1), and has since fueled debate in episodic memory research. In a typical experiment, depth is manipulated by asking participants to engage in deep or shallow processing of the to-be-remembered items during encoding (2). For instance, judging whether a word represents a living or a non-living entity is a deep encoding task because it requires semantic analysis and access to the meaning of the word. By contrast, judging whether a word contains a given letter is a shallow encoding task as it only requires structural and phonological analysis. Other shallow encoding tasks, such as syllable, rhyme, or pleasantness judgments, involve an intermediate level of analysis along the structural-semantic axis. Typically, items encoded using semantic operations are better remembered in a subsequent memory test than items encoded using shallow operations at any level of depth (2). LOP effects affect later performance even in the absence of any deliberate intention to learn, and are in fact most frequently studied using unintentional encoding.
The superiority of memory performance after deep encoding is not only one of the most robust findings in episodic memory research, but it is also clearly recognizable by most experimental participants, and both these factors contribute to the intuitive appeal of the LOP framework. In general, theorists agree that deep encoding results in more elaborate memory traces, and that this in turn affects later memorability. But what exactly constitutes an elaborate memory trace, and what are the mechanisms that make elaborate memory traces more memorable? The psychological literature has emphasized that enhanced distinctiveness and integration with pre-existing knowledge are among the factors that contribute to the memory benefit for items that received deep processing at encoding (3,4). Yet, it is not entirely clear what the exact mechanisms underlying LOP effects are, and how they are instantiated in the brain. Some authors argue that neuroimaging does not help explain the "experience of memory" and that the debate on LOP effects should remain within the boundaries of experimental psychology (5). However, there are a number of relevant questions that a cognitive neuroscience perspective can help address. For instance, knowing which brain regions are activated by deep and shallow encoding and how these activations relate to subsequent memory may inform on the specific processes at play during the two types of encoding, and on the nature of the differences (e.g., qualitative vs. quantitative) between them. Psychopharmacological or noninvasive brain stimulation (NIBS) interventions may add insights into what modulates neural activity associated with deep and shallow encoding in these brain regions, and the relative contribution of encoding and retrieval operations to LOP effects. This work draws from the cognitive neuroscience literature to describe, first, some of the key neuroimaging findings in this field, and the main questions related to the LOP framework that have been addressed in the past few years of research. I will focus on investigations of neural activity associated with successful episodic memory encoding, therefore on those studies that analyzed neural activity associated with deep and shallow processing at encoding as a function of later memory performance. Clearly, whether a given item will be remembered or not also depends on a set of processes that are specific to the retrieval situation, such as the way memory is probed, and the similarity between encoding and retrieval contexts (6,7). However, an examination of retrieval-specific mechanisms is beyond the aims of the current review, and brain activations associated with retrieval success and recognition memory judgments will not be discussed. I will then review recent findings from CNS-active psychopharmacological interventions that have helped clarify the nature of processes involved in deep and shallow encoding. Finally, I will discuss how NIBS holds promise for future studies aiming at investigating LOP effects, and may pave the way for selective enhancements of episodic encoding processes.

NEURAL CORRELATES OF EFFECTIVE DEEP AND SHALLOW ENCODING

Since the advent of event-related neuroimaging, several studies have investigated the neural correlates of successful memory formation, which involves a comparison of brain activity at encoding, separately for items that are remembered or forgotten in a subsequent memory test.
The rationale for this subsequent memory procedure (8) is that determining the neurobiological processes that influence whether an event will be memorable is of vital importance for the understanding of episodic memory. Functional magnetic resonance imaging (fMRI) investigations have consistently reported subsequent memory effects in the ventral and dorsal prefrontal cortex (PFC), medial temporal lobe (MTL), including the hippocampus and parahippocampal regions, and parietal cortex [for reviews see Ref. (9-12)]. In human electrophysiological studies, successful memory formation is indexed by positive-going event-related potentials (ERPs) recorded over anterior scalp sites, and a complex pattern of brain oscillations [for reviews see Ref. (13,14)]. Only a few studies, however (15-24), investigated differences in neural activity associated with deep and shallow encoding tasks as a function of subsequent memory performance. Most of these studies aimed to investigate whether deep and shallow processing leading to successful encoding differ qualitatively or quantitatively; in other words, whether they are expressions of distinct mnemonic mechanisms, or, rather, of different levels or strengths of a single encoding mechanism. In terms of brain substrates, this translates into the question of whether episodic encoding relies on a single neural system irrespective of encoding task, or is supported by multiple, task-specific systems. From a general standpoint, this question is complex because it requires a precise separation between deep and shallow encoding, which is in practice hard to achieve. The answer is, indeed, not easy. On the one hand, a good number of studies have demonstrated that a largely similar set of brain regions is implicated in successful deep and shallow encoding (15,16,19,20). More specifically, these studies have shown that the brain regions associated with shallow encoding are a subset of those engaged in deep encoding, with no brain region uniquely associated with the former (15,16,19). For instance, Otten et al. (19) demonstrated that remembered words that were deeply studied showed fMRI activations in the bilateral inferior frontal gyrus and left anterior and posterior hippocampus, while words encoded in the shallow task elicited subsequent memory effects only in the anterior hippocampus and in a smaller portion of the left inferior frontal gyrus. Evidence of a quantitative, rather than qualitative, difference between deep and shallow subsequent memory effects was also demonstrated in ERP and magnetoencephalography studies (21,22,25). These findings may suggest that memory formation relies on a single neural system, irrespective of the encoding task. In contrast with this view, two fMRI studies (17,24) found subsequent memory effects specific for shallow encoding in posterior brain regions, involving the bilateral posterior sulcus, bilateral fusiform gyrus, and left occipital gyrus (17), and an increased functional connectivity between the right hippocampus and the right DLPFC-parietal network (24). However, it should be noted that both Otten and Rugg (17) and Schott et al. (24) used a syllable judgment encoding task (reporting the number of syllables that compose a word). This task, while admittedly shallow, is at an intermediate level of depth compared to the alphabetic task used in the studies reviewed so far.
In addition, syllable judgments involve processes that rely on posterior brain regions, such as counting or inferring the number of syllables from the length of the word (26), with only limited engagement of the left PFC (27). Subsequent memory effects in parietal areas associated with a syllable judgment encoding task have indeed been reported before (28). It thus appears that memory formation for syllable judgments involves specific brain regions, which support the online encoding task, whereas other shallow encoding tasks, such as alphabetical judgments, may engage brain activations in prefrontal areas (29), and therefore overlap with those associated with deep encoding. This leads to another relevant question addressed by neuroimaging research, which is central for the understanding of the mechanisms underlying LOP effects, that is, the overlap between task-specific and subsequent-memory-related activations. Neuroimaging studies have consistently shown that task- or stimulus-specific brain regions activated during encoding (e.g., areas selectively activated by semantic and structural processing, or by a specific class of stimuli, such as faces) also demonstrated subsequent memory effects (15-17, 19, 21, 30). For instance, the signal increase associated with the deep encoding task in left inferior prefrontal and MTL regions mirrored the signal increase that emerged in the subsequent memory contrast for deeply encoded items within the same regions (15-17, 19, 21, 24). Notably, a recent functional connectivity study found increased connectivity between the left PFC and the hippocampus associated with both the semantic task and subsequent memory for deeply encoded stimuli (24). Analogous results in posterior brain regions were demonstrated for shallow encoding (17,24). These findings crucially suggest that memory formation engages the activation of a subset of brain regions that support online, task-specific processing. In other words, effective episodic encoding is supported by products of the processing engaged by the encoding task. One hypothesis is that during deep encoding, semantic elaboration supported by the left inferior PFC (27,31,32) automatically activates pre-existing knowledge and semantically associated information about the item, perhaps through a temporary semantic working memory system (15,17,32). The subsequent memory effects in the left inferior PFC and functional connectivity with the hippocampus observed for deep encoding may thus reflect the benefits of incorporating these semantic associations with the studied item into a unique representation of the study episode (10). In other words, during deep encoding items are bound to the contextual aspects of the study episode, which is one of the key components needed to form a coherent episode in memory (33). It is reasonable to assume that this mechanism is at least in part responsible for the superior memory performance, but also for the higher proportion of confident and recollection-based responses, associated with deep encoding (34). In contrast, shallow encoding tasks that heavily rely on structural processing, such as judging whether a word contains the letter "E," do not engender a sufficiently deep level of analysis to allow associative and contextual processes to unfold, and therefore the engagement of relational processes and MTL structures would only be minimal.
In between semantic and structural encoding tasks, episodic records for syllable judgments could perhaps incorporate some information derived from the encoding task, such as the word length. This could be reflected in the increased functional connectivity between the PFC and the hippocampus for shallow encoding (24). Taken together, the neuroimaging findings reviewed so far complement and extend previous knowledge on LOP effects, and on episodic memory in general. They suggest that effective deep and shallow encoding may be qualitatively or quantitatively different, depending on the specific processes that are active during the encoding task, and substantiate the idea that the episodic memory of an event is a byproduct of these processes (35). Task-specific and relational processing at encoding, associated with corresponding brain activations, may be ways in which memory formation for deeply encoded items is enhanced.

SELECTIVE MODULATION OF MEMORY FOR DEEPLY ENCODED EVENTS: EVIDENCE FROM PSYCHOPHARMACOLOGICAL STUDIES

Episodic memory is modulated by a number of neurotransmitters and CNS-active drugs. Studies that investigated the effects of pharmacological interventions on LOP vary with respect to the pharmacokinetic and pharmacodynamic properties of the drug, the dose and time of administration with respect to the memory phase (encoding or retrieval), and the memory test used to probe memory. That said, there is sufficient commonality in the studies to allow some comparison and integration. One of the most widely studied neurotransmission systems in relation to memory is the neocortical cholinergic system. Acetylcholine (ACh) projects from the basal forebrain to the cortex and the hippocampus, which contains one of the highest densities of cholinergic terminals and receptors (36). The PFC also shows dense cholinergic innervation (37). Given the key role of these brain structures in learning and memory (9,10,12), the modulation of memory functions by ACh is not surprising. Although the effect of pro-cholinergic drugs is not consistent across studies on healthy young and elderly participants (38,39), acetylcholinesterase inhibitors enhance episodic memory performance in patients with Alzheimer's disease (40), and are routine symptomatic treatments for memory decline in this clinical condition. A few studies have investigated behavioral and brain activation patterns associated with LOP effects following administration of drugs acting on cholinergic pathways, namely the acetylcholinesterase inhibitors donepezil and physostigmine (30,41), and nicotine (42). In all these studies, memory accuracy increased following drug administration. In addition, the cholinergic neuromodulation interacted with LOP at encoding, as the memory enhancement was restricted to deeply studied stimuli, while leaving memory accuracy for shallowly encoded items unaffected. This may appear surprising at first glance: ultimately, deeply encoded items should be less vulnerable to modulations as they involve stronger memory traces. So why would cholinergic effects act upon deeply, but not shallowly, encoded items? A recent fMRI study by Bentley and colleagues (30) offers a plausible explanation. In this study, elderly individuals and Alzheimer's patients received physostigmine or placebo during deep and shallow encoding of images depicting faces or buildings.
Volunteers had to judge whether a particular face or building was old or young in the deep encoding task, or whether the image was red or green in the shallow encoding task. For face stimuli, the results showed that in elderly individuals physostigmine increased subsequent memory performance for deeply, but not shallowly, encoded items. This behavioral advantage was associated with increased activations during deep encoding in the face-selective fusiform cortex, and with increased functional coupling between the fusiform cortex and the right hippocampus. In contrast, in Alzheimer's patients physostigmine did not induce task-dependent behavioral or brain activation changes. These findings substantiate the neuroimaging findings reviewed in the previous section by showing that effective deep encoding is supported by the activation of online, task- or stimulus-specific areas, and by their connections with MTL structures. They further extend previous evidence suggesting that the cholinergic system could be a crucial mediator of this effect. One caveat of the cholinergic studies reviewed so far is that the effect of the drug covered both encoding and retrieval. Given the well-known diverging effects of pro-cholinergic drugs on encoding and retrieval operations (43), future studies could investigate whether the interaction between LOP effects and ACh is dependent upon the time of administration with respect to the memory phase.

Whereas ACh generally facilitates episodic memory, other neurotransmitter systems are associated with reduced memorability. Ketamine, an antagonist of the N-methyl-D-aspartate (NMDA) receptor, and inhibitory neurotransmitters of the gamma-aminobutyric acid (GABA) system, such as benzodiazepines, induce drastic decreases in memory performance (44-46). For instance, the facilitation of GABA inhibits the functioning of the hippocampus, inducing dose-related decrements in episodic memory (47). At the neural level, the memory impairment is accompanied by encoding-related deactivations following benzodiazepine administration in the left dorsal PFC (48), left inferior PFC, and hippocampus (49). The modulation of memory performance and brain activations following ketamine and benzodiazepine administration is probably dependent upon the dense concentration of their receptor sites in the hippocampus and cerebral cortex (50,51). With respect to LOP, the effects of drugs with sedative and amnesic effects have been fairly inconsistent. Lorazepam and ketamine administration was associated with decrements of recognition memory accuracy, selectively for deeply encoded items, or items with an intermediate level of depth (52-54). However, studies from the same groups using similar doses and procedures showed no interaction between drug effects and LOP (45,55,56). It is not clear how to reconcile these diverging findings. Nevertheless, it is worth noticing that, similar to the effects of ACh, the effects of ketamine and benzodiazepines, if any, act upon deep but not shallow encoding. One could speculate that because of the dense populations of NMDA and benzodiazepine receptors in the frontal cortex and hippocampus, and the extensive recruitment of these brain structures in deep encoding, it is more likely that any disruption would affect deep encoding to a larger extent. Interestingly, Honey et al.
(45) demonstrated that, following ketamine administration, brain activity in the left ventrolateral PFC associated with a deep, compared to a shallow, encoding task increased. This suggests that ketamine may selectively affect the task-specific processing that supports successful memory formation. Investigations using other drugs with sedative actions produced additional divergent results, with no interaction between drug and LOP [barbiturates: Ref. (57)], and again selective impairment for deeply encoded items [the anesthetic propofol: Ref. (58,59)]. Finally, and surprisingly given its strong influence on memory (60), cortisol does not seem to interact with LOP. However, the effects of cortisol largely vary depending on dose, timing of administration relative to the memory phase, time of day of testing, emotional content of the stimulus, and arousal state at the time of testing (61,62). The relation between cortisol and memory is thus very complex, and future studies may find an effect of cortisol on LOP when controlling for some of these variables. The body of work summarized here suggests that neurotransmitter systems such as the cholinergic, GABA-ergic, and NMDA systems have a non-generic sedative or enhancing effect on episodic memory. Perhaps because of their pattern of receptor innervation in the brain, ketamine, ACh, and benzodiazepines selectively affect the memorability of items encoded using deep operations. The modulation of neural activity in brain regions that support the online encoding task may be one way in which CNS-active drugs act upon memory formation of elaborate memory traces. This discussion emphasizes the need for further research on the specific mechanisms that contribute to drug-induced improvements or decrements of episodic memory.

NEUROMODULATION OF DEPTH OF PROCESSING BY NON-INVASIVE BRAIN STIMULATION: EMERGING EVIDENCE

Functional magnetic resonance imaging and electrophysiological techniques are inherently correlational; therefore, it is not possible on the basis of their data alone to determine whether neural activity is necessary for a specific task. NIBS techniques instead can provide information on the causal role of a specific brain region in a given cognitive process. Transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) are the most widely adopted NIBS techniques in the investigation of memory functions. Transcranial magnetic stimulation uses a magnetic field to induce changes in the resting potentials of the underlying cortex and thus in its electrical currents. This determines a transient interruption of normal brain activity and interference with cognitive processing (63). TMS can be delivered as a single pulse, or as a series of single pulses (repetitive TMS, rTMS), and can have facilitatory or inhibitory effects depending on the frequency of stimulation. In contrast, tDCS delivers constant, low-intensity (up to 2 mA) electrical currents to the scalp via two large anode and cathode electrodes (64). The current modifies resting membrane potentials and the spontaneous firing rate of neurons in a polarity-dependent fashion, without however inducing action potentials (65). Because of their distinct physiological mechanisms, TMS and tDCS differ in the type of information they provide. TMS is focal, whereas the spatial resolution of tDCS is limited. In addition, TMS is generally locked in time with stimulus presentation, or with other events of interest.
The temporal dynamics of the engagement of a given brain region can thus be identified by observing the effects of TMS on this region at different points in time (66-69). In contrast, tDCS is not locked to the presentation of single events; rather, it is delivered over a prolonged period of time, off-line or during the task. To investigate memory formation, TMS or tDCS is typically delivered over the target area and one or more control sites during encoding, and subsequent memory performance is then assessed as a function of the stimulation condition. On the whole, NIBS studies confirmed previous fMRI and PET evidence of the key role of the PFC in episodic memory formation, either in its dorsal (68, 70-78) or ventral (66, 67, 79-82) portions. It is worth remembering that, as the depth of stimulation is limited to a few centimeters, TMS and tDCS cannot directly stimulate some of the key regions involved in episodic memory formation, such as MTL structures. However, a recent neuroimaging study (83) has shown that the stimulation can modulate intrinsic brain network dynamics and propagate to distal brain structures, including the hippocampus. The majority of brain stimulation studies adopted only one encoding strategy, consisting of a semantic judgment. To date, only two TMS studies directly compared deep and shallow encoding tasks and their effect on memory (76,84). Innocenti et al. (76) delivered 10 Hz rTMS to the left and right dorsolateral PFC during a semantic and an alphabetical judgment encoding task. The effect of the stimulations on subsequent memory performance was compared with the stimulation of a control site and a no-TMS condition (baseline). Consistent with previous studies (68, 70-75), rTMS delivered over the left dorsolateral PFC decreased recognition accuracy compared to the other stimulation conditions. However, this effect was specific to semantically encoded words. In the study by Vidal-Piñeiro et al. (84), instead, memory for deep and shallow encoding was equally unaffected by the off-line theta burst stimulation (TBS) of a more ventral region of the PFC. However, as evidenced by a post-TMS fMRI scan, TBS increased activations of the left ventrolateral PFC, occipital cortex, and cerebellum, and the connectivity between these brain regions, while volunteers were performing the deep encoding task. These findings suggest that the combination of neuroimaging and brain stimulation offers relevant insights into the brain networks involved in LOP effects, even in the absence of overt behavioral effects. There are several methodological differences between these two TMS studies that may have determined the discrepancy of behavioral effects, including differences in the protocol and site of stimulation. Nevertheless, once again the literature offers a scenario in which the neural or behavioral modulation is specific to semantic encoding. Along the same lines as what has been suggested for psychopharmacological studies, neuromodulation with TMS may interfere with task-specific and associative processes that support the online semantic task. Unfortunately, performance for the semantic encoding task is generally at ceiling, and this makes it hard to detect any effect of neuromodulation at encoding. In fact, investigations that used a neuromodulatory approach either did not report performance data for the encoding tasks, or reported a lack of effects. In summary, TMS holds promise for future investigations of LOP effects.
The possibility of selectively interfering with specific stages of memory (encoding vs. retrieval) makes this technique an excellent candidate for the study of the relative role of encoding and retrieval operations in determining LOP effects. Further studies using different stimulation protocols, sites, and timings are needed to expand our knowledge of selective effects of TMS on deep encoding. Electrical stimulation with tDCS could also further our understanding of LOP effects. For instance, the differential effects of anodal and cathodal tDCS on episodic memory performance (85) may induce dissociations in LOP effects, thereby adding to the investigation of the nature of the differences between deep and shallow encoding. In addition, given that anodal tDCS induces enhancements in episodic memory performance in healthy young and elderly individuals (78), it will be of great interest in future studies to assess whether tDCS can selectively induce memory enhancements according to the depth of encoding. This could be relevant especially for those pathological conditions that are characterized by memory impairments of deep but not shallow encoding (86-88). Finally, the observation that subsequent memory for deep and shallow encoding is associated with different patterns of oscillatory brain activity (23, 89, 90) will provide impetus for the investigation of the effects of rhythmic brain stimulation (rTMS and transcranial alternating current stimulation) on depth of processing.

CONCLUSION

In this article, I reviewed the contribution of neuroimaging, psychopharmacological, and NIBS studies to our understanding of LOP effects. Taken together, the findings discussed here provide partial answers to the question of "what makes deeply encoded items more memorable?" They suggest that memory formation for deeply encoded events is enhanced when the products of online, task-specific processing are integrated with pre-existing knowledge about the event into a coherent episodic memory trace. At the neural level, this is reflected in overlapping task- and encoding-related brain activations, and their functional connections with MTL structures. These findings therefore converge with the psychological literature, which has previously suggested that the episodic memory of an event is a byproduct of the processes active during encoding (35), and that the integration with pre-existing knowledge contributes to the memory benefit for items that receive deep processing at encoding (3, 4). Crucially, these cognitive and neural processes are mediated by activity in cholinergic, GABA-ergic, and NMDA neurotransmitter systems, which, analogously to NIBS, specifically modulate memory formation for deep encoding.

The proposed mechanism may not be exclusive to deep encoding per se. Rather, it may generalize to shallow encoding tasks that are of sufficient depth to induce associative processes in the formation of the episodic record. It is important to note that a process-based account need not be the only explanation for the specificity of the effects for deep encoding. For instance, the number of trials for shallow encoding in any given subsequent memory comparison is generally small due to low memory performance. Therefore, the power to detect any statistical difference in this condition is low. One could speculate that distinct subsequent memory patterns would emerge if a shallow encoding task that yields higher memory accuracy was used.
In this view, the posterior subsequent memory effects for shallow encoding reported in Otten and Rugg (17) and Schott et al. (24) could be attributable to the higher number of trials in the syllable judgment encoding task, rather than to the specific processes involved in this task. Distinguishing between these alternative views will be difficult, but future studies could address this issue through careful examinations of how systematic variations of encoding tasks yielding different levels of memory accuracy correspond to linear changes in brain activation patterns. Finally, one should emphasize that the mechanisms enhancing memory formation for deeply encoded events reviewed here provide only part of what is needed to accurately remember those events; that is, they provide the potential for retrieval (4). Equally important are the processes that occur during retrieval, the way memory is later tested, and the overlap between the encoding and retrieval situations (6, 7). The possibility of selectively interfering with different stages of the memory process makes neuromodulatory approaches excellent candidates for the investigation of the interdependence of encoding and retrieval operations.
Prediction-Aware Quality Enhancement of VVC Using CNN

The upcoming video coding standard, Versatile Video Coding (VVC), has shown great improvement compared to its predecessor, High Efficiency Video Coding (HEVC), in terms of bitrate saving. Despite its substantial performance, compressed videos might still suffer from quality degradation at low bitrates due to coding artifacts such as blockiness, blurriness and ringing. In this work, we exploit Convolutional Neural Networks (CNN) to enhance the quality of VVC coded frames after decoding in order to reduce low bitrate artifacts. The main contribution of this work is the use of coding information from the compressed bitstream. More precisely, the prediction information of intra frames is used for training the network in addition to the reconstruction information. The proposed method is applied on both luminance and chrominance components of intra coded frames of VVC. Experiments on the VVC Test Model (VTM) show that, both at low and high bitrates, the use of coding information can improve the BD-rate performance by about 1% and 6% for the luma and chroma components, respectively.

I. INTRODUCTION

Video streaming applications have gained more popularity in the past few years. Therefore, the task of delivering a high quality video has become essential. From the compression point of view, the upcoming video coding standards, in particular VVC, can achieve up to 50% bitrate saving compared to its predecessor HEVC [1]. Alongside the video coding progress, receiver devices have also become more powerful in processing received videos and enhancing their quality. As a result, video post-processing is nowadays an interesting option for display manufacturers in order to further improve the viewing experience of their users.

The promising performance of machine learning methods has recently encouraged researchers to exploit them in the video compression domain. Particularly, deep Convolutional Neural Networks (CNN) have attracted more attention owing to their significant performance [2], [3]. Despite the interesting performance of CNN-based methods, they usually impose a high computational complexity, which makes them unsuitable for real-time encoding applications. However, post-processing approaches, which improve the reconstructed video after the decoding step, can be more flexible, since they are not involved in the encoding and decoding process. In other words, such post-processing approaches can serve as an optional step to be used based on the hardware capacity of the decoder device.

[Fig. 1: Compressed video quality enhancement framework]

CNN-based quality enhancement (QE) for VVC has been sparsely studied in the literature. The existing works target both intra and inter frames of coded videos. In [4]-[11], CNN-based methods have been proposed for VVC standardization, either as in-loop filters or as post-processing steps. Considering the fact that the distortion in compressed video is influenced by the encoding process and its decision-making engine, an attention-based network is proposed in [12], where the partitioning information of VVC is exploited to further increase the performance of the QE filter. Finally, in [13], the impact of network architecture complexity on the performance of the QE filter has been studied. In this paper, a CNN-based QE method is proposed, which follows the objective of the previously presented works with the use of coding information [4], [12], [14].
The main contribution of this work is that we use the spatial predictor of each frame as an input to the CNN. This is motivated by the fact that coding information, such as the intra prediction signal, usually represents useful information about the type of the distortion [15]. Fig. 1 presents the overall workflow of the proposed method. The inputs of the QE neural network are the decoded frame, the intra prediction information and the Quantization Parameter (QP). The CNN architecture of this paper is inspired by the network proposed in [16], which has shown great performance for the super resolution problem. Moreover, the three color components of each frame are processed separately.

The rest of this paper is organized as follows. In Section II, the proposed QE method using intra prediction as coding information is presented. Experimental results as well as discussions and comparisons with state of the art solutions are provided in Section III and, finally, Section IV concludes the paper.

II. PREDICTOR-AWARE QUALITY ENHANCEMENT

In this section, first we explain the intuition and motivation for using intra prediction in the proposed CNN-based QE method. Then, the network architecture and training configuration are presented.

A. Intra coding and compression artifacts

In intra coding, each block is predicted based on its neighboring pixels, given some predefined models. In VVC, these models include a set of 67 Intra Prediction Modes (IPM), representing 65 angular IPMs, plus DC and planar. Like other decisions in video coding, the selection of an IPM for a block consists in optimizing a function of the rate and the distortion, called the rate-distortion (R-D) cost. Particularly for intra coding, the R-D cost of an IPM i, denoted as J_i, is computed as

J_i = D_i + λ R_i,

where D_i and R_i are the distortion and the rate, respectively, obtained when using i as the IPM of the block. Moreover, λ is the Lagrangian multiplier, computed based on the QP, which determines the relative importance of the rate and the distortion during the decision-making process. For instance, at low bitrates (high QP), the value of λ is higher, which indicates that minimization of the rate is relatively more important than minimization of the distortion.

Strict bitrate constraints might cause a situation where the best IPM, minimizing the R-D cost of a block, is not necessarily the IPM that models the block texture most accurately. Fig. 2 shows an example of such a situation in the first frame of the BQSquare sequence. In this figure, a 16 × 16 block, k, is selected, and the prediction (P_i^k) and reconstruction (C_i^k) blocks corresponding to its two best IPMs in terms of R-D cost are shown. As can be seen, despite their similar R-D costs, these two IPMs result in very different reconstruction signals, with different types of compression loss patterns. This behavior is due to the two different R-D trade-offs of the selected modes. On one hand, IPM 38 is able to model the block content more accurately (i.e. smaller distortion D_38) at the cost of a higher IPM and residual coding rate (i.e. R_38). On the other hand, IPM 50 provides a less accurate texture modeling (i.e. high distortion D_50) with a smaller residual and IPM coding rate (i.e. R_50). Consequently, these two IPMs result in very different types of artifacts for the given block, as can be seen by comparing the corresponding reconstruction blocks (i.e. C_38^k and C_50^k).
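To make the trade-off concrete, the sketch below implements the R-D mode decision in Python; the candidate set and the distortion/rate values are invented toy numbers echoing the BQSquare example, not output of the VTM encoder.

```python
# Minimal sketch of R-D based intra prediction mode (IPM) selection,
# J_i = D_i + lambda * R_i; toy values only, not the VTM implementation.

def rd_cost(distortion: float, rate: float, lam: float) -> float:
    """Rate-distortion cost J = D + lambda * R."""
    return distortion + lam * rate

def select_ipm(candidates: dict, lam: float) -> int:
    """Return the IPM index with the smallest R-D cost.
    `candidates` maps IPM index -> (distortion, rate) for one block."""
    return min(candidates, key=lambda i: rd_cost(*candidates[i], lam))

# IPM 38 models the texture better (low D, high R); IPM 50 is cheaper
# to signal (high D, low R), mirroring the example discussed above.
candidates = {38: (120.0, 95.0), 50: (310.0, 24.0)}

for lam in (2.0, 8.0):  # a higher lambda corresponds to a higher QP
    print(f"lambda={lam}: best IPM = {select_ipm(candidates, lam)}")
# lambda=2.0 picks IPM 38 (J=310 vs 358); lambda=8.0 picks IPM 50 (J=502 vs 880).
```

The flip of the winning mode as λ grows is exactly the mechanism that produces mode-dependent artifact patterns at low bitrates.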
The above example illustrates that the task of QE for a block, a frame or an entire sequence could be significantly impacted by the different choices of coding modes (e.g. IPM) determined by the encoder. This observation is the main motivation in our work for using the intra prediction information in the training of the quality enhancement networks.

B. Proposed CNN-based quality enhancement method

The proposed QE algorithm is applied on intra frames after decoding. In order to accurately capture the compression loss, as explained in the previous section, the prediction information is also extracted from the decoder and is used as an input to the QE network. For each reconstruction frame, this prediction information is composed of the predictors associated with its blocks. The predictor of each block is the projection of its reference pixels corresponding to the angle of the used IPM. The reconstruction and prediction frames are concatenated and fed to the network as one input image.

Inspired by the architecture of the Enhanced Deep Super Resolution (EDSR) network [16], we have exploited residual training in our QE network. The architecture of the QE network is shown in Fig. 3. The first convolutional layer receives the reconstruction and prediction frames as input. In the next step, after one convolutional layer, 32 identical residual blocks, each composed of two convolutional layers with one ReLU layer in between, are used. The convolutional layers in the residual blocks have the same number of feature maps and the same kernel size as the first convolutional layer. In order to normalize the feature maps, a convolutional layer with batch normalization is applied after the residual blocks. A skip connection between the input of the first and the last residual block is used. Two more convolutional layers after the residual blocks are used. Finally, the last convolutional layer has one feature map, which constructs the output frame.

Given I = P ⊕ C, the concatenation of the prediction frame P and the reconstruction frame C, the QE function producing the enhanced frame Ô is formulated as Ô = f_QE(I; θ_QE), a composition of the convolutional operators F_1(.), F_2(.) and F_3(.), where F_1(.) and F_2(.) are 3×3×256 convolutional layers, with and without the ReLU activation layer, respectively. Moreover, F_3(.) is a 3×3×1 convolutional layer with a ReLU activation layer. The superscript of each function indicates the number of times it is applied. The task of the training phase is to optimize the parameters θ_QE of the above QE function, f_QE, expressed as

θ̂_QE = arg min over θ_QE of L(θ_QE).

The L2 norm with respect to the original frame O is used as the cost function of the training phase:

L(θ_QE) = ‖f_QE(I; θ_QE) − O‖₂².

In the proposed method, each color component of the decoded video (i.e. one luminance and two chrominance) is enhanced separately. For this purpose, one network per component and QP is trained with the above network architecture, using the corresponding prediction signal of that component.

III. EXPERIMENTAL RESULTS

As the proposed post-processing module is designed to enhance the quality of intra coded frames, the two image datasets DIV2K and Flickr2K are used for training. All images in the datasets are encoded in the All-Intra (AI) configuration of the VVC Test Model version 5.0 (VTM-5.0) [1], using 6 QPs between 22 and 47. The prediction information is extracted during the decoding process for all datasets in all QP ranges. The network was implemented and trained in PyTorch (1.4.0). For training, 64 × 64 patches of reconstruction and prediction frames were extracted randomly from the training dataset, with a batch size of 32.
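The paper reports a PyTorch implementation but gives no code; the following is a minimal, hedged PyTorch sketch of an EDSR-style residual QE network and one training step with the L2 cost above. The channel width and block count are reduced from the stated 256 and 32 for brevity, and all tensor shapes and data are illustrative placeholders.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: conv -> ReLU -> conv, plus an identity skip."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)

class QENet(nn.Module):
    """EDSR-style QE network. Input: prediction and reconstruction frames
    stacked as channels (I = P concat C); output: one enhanced frame."""
    def __init__(self, in_ch: int = 2, ch: int = 64, n_blocks: int = 8):
        super().__init__()
        self.head = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.body = nn.Sequential(
            *[ResBlock(ch) for _ in range(n_blocks)],
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch),
        )
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        feat = self.head(x)
        feat = feat + self.body(feat)  # global skip over the residual blocks
        return self.tail(feat)

# One training step with the L2 cost against the original frame O.
net = QENet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
recon = torch.rand(32, 1, 64, 64)  # decoded patches C (random stand-ins)
pred = torch.rand(32, 1, 64, 64)   # intra prediction patches P
orig = torch.rand(32, 1, 64, 64)   # ground-truth patches O
loss = nn.functional.mse_loss(net(torch.cat([recon, pred], dim=1)), orig)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Feeding the prediction frame as a second input channel is the only change relative to a reconstruction-only QE network, which is what makes the comparison "with vs. without prediction" below a clean ablation.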
The training started with a learning rate of 10⁻⁴, which was then decayed by a factor of 0.1 every 100 epochs, until 500 epochs. At the end of the training, a total of 3×6 trained models were obtained, for 3 components at 6 QPs.

In order to evaluate our method, the test sequences of the JVET CTC (classes A1, A2, B, C, D, E) were encoded with VTM-5.0 at each of the 6 QPs in the AI configuration. To study the effect of the QE method at different bitrates, two QP ranges were evaluated: 1) the CTC QP range (22, 27, 32, 37), and 2) the high QP range (32, 37, 42, 47). The performance of the different benchmark methods was measured using the Bjontegaard delta (BD) bitrate saving metric based on the PSNR difference, with respect to VTM-5.0 with no QE as the anchor. Three state-of-the-art VVC CNN-based QE methods are used as benchmarks. The first two methods are JVET contributions [4], [14] proposing QE methods as post-processing. Both of these methods deploy a slightly simpler network architecture with the QP map and use the reconstruction signal as the only input to the network. To assess the benefit of using the IPM as input to the network, we also present the results for the proposed method with only the reconstruction frame as input (denoted "proposed - without prediction").

Table I presents the performance of our proposed method against the anchor, compared with the two benchmark methods. It can be seen that in the CTC QP range, the proposed method can achieve an average BD-rate gain of 6.7%, 12.6% and 14.5% on the Y, U and V components, respectively. In the same QP range, it is also observed that the proposed method with the prediction signal outperforms the proposed method without the prediction signal by 0.9%, 8.1% and 4.8% on the Y, U and V components, respectively. Compared to the other two JVET solutions, the proposed method shows a significant gain in the CTC QP range. In the high QP range, where artifacts are significantly stronger, the only comparison is between the proposed method with and without the prediction signal. As can be seen in Table I, the proposed method can achieve an average BD-rate gain of 8.3%, 15.8% and 16.2% on the Y, U and V components, respectively. As in the CTC QP range, the use of the prediction signal at high QPs also further increases the gain, with an average BD-rate of 1.3%, 7.1% and 3.5% on the Y, U and V components, respectively. In both QP ranges, the achieved BD-rate gain of using the prediction signal is relatively higher for the U and V components than for the Y component. This can be due to [1]. The use of coding information such as intra prediction might enable the CNN-based QE to benefit from the existing correlations and more efficiently predict the compression artifacts.

IV. CONCLUSION

In this paper, a CNN-based quality enhancement method was proposed for VVC coded frames, which benefits from the coding information in the intra prediction signal of each frame. The experiments showed that using prediction information can significantly improve the performance of CNN-based enhancement methods, both for the luma and chroma components of intra frames. The best explanation for the observed improvements is that exposing the CNN training process to the coding information of the sequences, along with their ground-truth original signal, helps them in learning the patterns of compression artifacts. Hence, when the networks are used for the QE task on actual compressed sequences, they can more efficiently recover the lost information.
Smoking, Smoking Cessation, and Measures of Subclinical Atherosclerosis in Multiple Vascular Beds in Japanese Men

Background: Smoking is an overwhelming, but preventable, risk factor for cardiovascular diseases (CVD), although smoking prevalence remains high in developed and developing countries in East Asia.

Methods and Results: In a population-based sample of 1019 Japanese men aged 40 to 79 years, without CVD, we examined cross-sectional associations of smoking status, cumulative pack-years, daily consumption, and time since cessation, with subclinical atherosclerosis at 4 anatomically distinct vascular beds, including coronary artery calcification, carotid intima-media thickness (CIMT) and plaque, aortic artery calcification (AoAC), and ankle-brachial index. Current, former, and never smoking were present in 32.3%, 50.0%, and 17.7%, respectively. Compared to never smokers, current smokers had significantly higher risks of subclinical atherosclerosis in all 4 circulations (eg, odds ratios for coronary artery calcification >0, 1.79 [95% CIs, 1.16–2.79]; CIMT >1.0 mm, 1.88 [1.02–3.47]; AoAC >0, 4.29 [2.30–7.97]; and ankle-brachial index <1.1, 1.78 [1.16–2.74]), and former smokers did in the carotid and aortic circulations (CIMT >1.0 mm, 1.94 [1.13–3.34]; and AoAC >0, 2.55 [1.45–4.49]). Dose–response relationships of pack-years and daily consumption, particularly with CIMT, carotid plaque, AoAC, and ankle-brachial index, were observed among both current and former smokers, and even a small amount of pack-years or daily consumption among current smokers was associated with coronary artery calcification and AoAC, whereas time since cessation among former smokers was linearly associated with lower burdens of all atherosclerotic indices.

Conclusions: Cigarette smoking was strongly associated with subclinical atherosclerosis in multiple vascular beds in Japanese men, and these associations attenuated with time since cessation.

Smoking remains the number 1 preventable cause of death worldwide, and it contributes significantly to cardiovascular diseases (CVD). 1 Despite a recent decline in smoking rates in Western industrialized countries, tobacco use is still very high in developing Asian countries as well as in industrialized ones, including Japan, with 30% of the male population lighting up. 1 Among many contributors to CVD, smoking is one of the leading avoidable causes, and therefore advancing a tobacco-free world is a key strategic priority in preventive medicine. The evidence linking smoking exposure with various CVD, including myocardial infarction, stroke, and aortic and peripheral vascular diseases, is clearly present, although the mechanisms accounting for these associations have not been fully elucidated. 2 Atherosclerosis is considered to be a critical player in the pathophysiology of smoking-induced CVD, 2 and these adverse effects would be reduced by smoking cessation. 3 Accordingly, in view of public health and clinical resources, determining which measures of subclinical atherosclerosis are most influenced by smoking, and whether the relationships between smoking and these subclinical measures decrease with longer time since smoking cessation, is of utmost importance.
Precise and valid measures of subclinical atherosclerosis are available for the coronary, carotid, aortic, and peripheral circulations; however, few population-based studies have assessed the associations of both smoking and smoking cessation with subclinical atherosclerosis, 4,5 and no studies, to date, have examined the association of smoking and smoking cessation with subclinical atherosclerosis in all 4 circulations. In a general sample of Japanese men with a high smoking burden, but low atherosclerotic risk, 6-8 we aimed to cross-sectionally investigate the influence of smoking on these 4 anatomically distinct vascular beds, using coronary artery calcification (CAC), carotid intima-media thickness (CIMT) and plaque, aortic artery calcification (AoAC), and ankle-brachial index (ABI), by examining the following: (1) the strength of association between smoking status and each vascular bed; (2) whether there are dose-response relationships with cumulative smoking exposure by pack-years and with smoking intensity by daily cigarette consumption in these associations; and (3) whether these associations attenuate with length of time since smoking cessation in former smokers.

Study Participants and Measurements

The Shiga Epidemiological Study of Subclinical Atherosclerosis (SESSA) is an ongoing prospective, population-based study of a random sample from a general Japanese population, as described elsewhere. 9,10 Participants eligible for the present study were 1094 men aged 40 to 79 years enrolled at baseline (May 2006–March 2008) in the SESSA. After excluding 75 participants with a history of myocardial infarction or stroke (n=66) or with missing information (n=9), including variables related to smoking, a total of 1019 participants were analyzed in the present study. Of the 1019 participants, 987 underwent carotid ultrasound examination to assess CIMT and plaque. The present study was approved by the Institutional Review Board of Shiga University of Medical Science (No. 17-19, 17-83; Otsu, Japan), and all participants provided written informed consent.

A self-administered questionnaire was used to obtain information on demography, smoking habits, alcohol drinking, physical activity, socioeconomic status, medication use (hypertension, dyslipidemia, and diabetes mellitus), and medical history. After the participants completed the questionnaires, trained nurses confirmed them with the participants. Smoking status was categorized into 3 groups: current, former, and never smokers. Participants who smoked in the last 30 days were defined as current smokers, whereas participants who had never smoked before were defined as never smokers. Based on this information, daily cigarette consumption was calculated in former and current smokers. Pack-years were estimated by multiplying the average number of packs smoked daily by the number of smoking years. Time since smoking cessation was calculated by subtracting age at cessation from age at the baseline survey. Body mass index (BMI) was calculated as weight (kg) divided by height squared (m²). Using an automated sphygmomanometer (BP-8800; Omron Health Care Co. Ltd, Tokyo, Japan), the mean of 2 consecutive measurements on the right arm, with participants in a seated position after a 5-minute rest, was used to determine blood pressure.
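The derived exposure variables defined above are simple to compute; the following is a minimal Python sketch, with hypothetical function names and example values that are not the SESSA field names or data.

```python
# Derived variables described above; 20 cigarettes per pack is assumed.

def pack_years(cigs_per_day: float, years_smoked: float) -> float:
    """Average packs smoked daily multiplied by years of smoking."""
    return (cigs_per_day / 20.0) * years_smoked

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def years_since_cessation(age_at_baseline: float, age_at_quit: float) -> float:
    """Age at the baseline survey minus age at smoking cessation."""
    return age_at_baseline - age_at_quit

print(pack_years(15, 30))             # 22.5 pack-years
print(round(bmi(70, 1.70), 1))        # 24.2 kg/m^2
print(years_since_cessation(65, 50))  # 15 years
```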
Diabetes mellitus was defined as a hemoglobin A1c ≥6.1% (per the Japan Diabetes Society protocol; equivalent to ≥6.5% in the National Glycohemoglobin Standardization Program), 11 a fasting blood glucose ≥126 mg/dL, or the use of antidiabetic medications. Total cholesterol and triglycerides were measured using enzymatic assays, and high-density lipoprotein (HDL) cholesterol (HDL-C) was determined using a direct method. Low-density lipoprotein cholesterol was estimated using Friedewald's formula 12 in the 1003 participants with triglycerides <400 mg/dL. Lipid measurements were standardized according to the protocol of the US Centers for Disease Control and Prevention/Cholesterol Reference Method Laboratory Network. C-reactive protein (CRP) was measured by nephelometry using a BN II Analyzer with a threshold of 0.16 mg/L. Alcohol intake was estimated as the total weekly amount of alcohol consumption (g/week). Participants who exercised were defined as those who regularly did either brisk walking or more-active exercise ≥1 h/week. 13,14

Assessment of Subclinical Atherosclerosis

We assessed CAC and AoAC either by electron-beam computed tomography (EBCT) using a C-150 scanner (Imatron, South San Francisco, CA) or by 16-channel multidetector-row computed tomography (MDCT) using an Aquilion scanner (Toshiba, Tokyo, Japan). 9,15 EBCT and MDCT images accounted for 69.8% and 30.2%, respectively, of the coronary and aortic scans. Images were obtained from the level of the aortic root through the heart in 3-mm slices to evaluate CAC, and from the aortic arch to the iliac bifurcation in 6-mm slices to evaluate AoAC, with a scan time of 100 (EBCT) or 320 ms (MDCT). Presence of CAC and AoAC was defined as a minimum of 3 contiguous pixels (area = 1 mm²) with a density of ≥130 HU, using the AccuImage software (AccuImage Diagnostics, South San Francisco, CA). One trained physician interpreted the coronary and aortic scans using the widely accepted Agatston method. 16 The physician was blinded to the participants' characteristics. Protocols for CAC and AoAC measurement were developed from a separate cohort study by our research group, 17 in which the reproducibility of coronary and aortic scans was very high (intraclass correlations of 0.99 and 0.98, respectively). 18 Because a stratified analysis by computed tomography (CT) type showed similar results, images by EBCT and MDCT were considered equivalent. Other studies have also reported comparable findings for CAC and AoAC assessment by EBCT and MDCT. 19,20

As previously described, 10,21 high-resolution B-mode ultrasound (Xario-660A; Toshiba Medical Systems, Tokyo, Japan) was used to scan both the right and left carotid arteries with a 7.5-MHz probe, according to a standardized method established by the Ultrasound Research Laboratory of the University of Pittsburgh (Pittsburgh, PA). 22 Images from the following segments were digitized: both near and far walls of the common carotid arteries (CCA; 1 cm proximal to the carotid bulb); the far wall of the bulb; and the far wall of the internal carotid arteries (ICAs; 1 cm distal to the bulb). Intima-media thickness in each image was traced with an automatic image-reading program (AMS; Chalmers University of Technology, Gothenburg, Sweden). Mean CIMT comprised the mean of all averages across the 8 locations (4 in each artery) from the CCAs, bulb, and ICAs. 10,21 Carotid plaque was defined as a focal thickening lesion (≥50% protrusion compared to adjacent areas) with CIMT >1 mm. 21
The total number of plaques in the CCAs, bulb, and ICAs on both the left and right sides was counted. ABI was estimated separately for each leg, with the numerator being the higher of the dorsalis pedis or posterior tibial systolic pressures in each leg, and the denominator being the higher of the right versus left brachial systolic pressures, using an automatic waveform analyzer (Form I PWV/ABI; Omron Health Care Co. Ltd, Tokyo, Japan), while participants were in a supine position after a 5-minute rest. 10 The ABI selected for each participant was the smaller of the right versus left ABI.

Statistical Analysis

Characteristics were analyzed according to smoking status. Differences in characteristics were evaluated using ANOVA, the χ² test, or the Kruskal–Wallis test. All outcomes were analyzed in a cross-sectional fashion. For dichotomous outcomes, including CAC, AoAC, CIMT, and ABI, logistic regression was used. Cut-off points for CAC were defined as CAC score >0, ≥100, and ≥400, according to their clinical significance and for consistency with previous studies. 23,24 Similarly, AoAC was defined as AoAC score >0, ≥100, 18,25,26 and ≥1000; CIMT was defined as CIMT >1.0 mm; and ABI was defined as ABI <1.1. 27-29 Because of the low prevalence of ABI <1.0 (n=44 [4%]), this cut-off point was not chosen. For carotid plaque, because the distribution showed overdispersion, negative binomial regression was used. In all analyses, the adjusted model included age, BMI, systolic blood pressure, total cholesterol, HDL-C, medication for hypertension and dyslipidemia (yes/no), diabetes mellitus (yes/no), alcohol intake (g/week), exercise (yes/no), and CRP. CT type was further included when CAC and AoAC were analyzed. Further adjustment for occupation status and education years did not substantially affect the findings, and therefore these variables were not included in the model.

For cumulative exposure or intensity analyses, the associations of measures of subclinical atherosclerosis with tertiles of pack-years or daily cigarette consumption were estimated in both former and current smokers, compared to never smokers. For smoking cessation analyses, the association of subclinical atherosclerosis with tertiles of time since cessation was estimated in former smokers compared to current smokers. Similarly, cessation analyses were conducted compared to never smokers. As a sensitivity analysis, daily cigarette consumption was taken into account in the adjusted model for the smoking cessation analyses. Tertiles were used for pack-years, daily cigarette consumption, and time since cessation because (1) there are no established cut-off points for these measures; (2) these measures were not normally distributed (their distributions were likely to be right-skewed); and (3) a sufficient sample size was obtained in each group. Tests for trend across categories were based on assigning the median value to each category of pack-years, cigarettes smoked daily, and years since quitting, and modeling this variable as a continuous variable. Analyses were performed using SPSS (version 22.0; SPSS, Inc., Chicago, IL) and SAS software (version 9.4; SAS Institute Inc., Cary, NC). Two-tailed P values of <0.05 were considered statistically significant.

Results

Of the 1019 participants, 329 (32.3%) were current, 509 (50.0%) were former, and 181 (17.7%) were never smokers. Characteristics based on smoking status are shown in Table 1.
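Before the detailed results, the outcome models described in the Statistical Analysis section can be sketched in Python on simulated data (the study itself used SPSS and SAS); all variable names, coefficients, and the simulation are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Simulate a toy cohort; none of this reflects the SESSA data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(40, 79, n),
    "bmi": rng.normal(23, 3, n),
    "sbp": rng.normal(130, 15, n),
    "current_smoker": rng.integers(0, 2, n),
})
logit_p = -8 + 0.08 * df["age"] + 0.6 * df["current_smoker"]
df["cac_present"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)
df["plaque_count"] = rng.poisson(
    np.exp(-2 + 0.03 * df["age"] + 0.4 * df["current_smoker"]))

# Dichotomous outcome (e.g., CAC > 0): logistic regression
logit = smf.logit("cac_present ~ current_smoker + age + bmi + sbp", df).fit(disp=0)
print("adjusted OR:", np.exp(logit.params["current_smoker"]))

# Overdispersed plaque counts: negative binomial regression
nb = smf.glm("plaque_count ~ current_smoker + age + bmi + sbp", df,
             family=sm.families.NegativeBinomial()).fit()
print("ratio of expected counts:", np.exp(nb.params["current_smoker"]))
```

Exponentiating the smoking coefficient yields the adjusted odds ratio (logistic model) or the ratio of expected counts (negative binomial model), which are the quantities reported in the study's tables.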
Current smokers were younger and had lower levels of systolic blood pressure and HDL-C, a higher level of CRP, less medication for hypertension, higher alcohol intake, less exercise, and a lower unemployment rate than former or never smokers. Graded associations were observed between cumulative smoking exposure by pack-years and subclinical atherosclerosis, including CIMT, carotid plaque, CAC >0, ABI, and all indices of AoAC, in current smokers (all P for trend <0.05; Table 2), whereas increases in ORs for CAC ≥100 and ≥400 were not statistically significant with higher pack-year cumulative exposure among current smokers. Similar results were observed among former smokers (Table S1). Even the lowest tertile of pack-years among current smokers was significantly associated with CAC ≥100, ≥400, and all indices of AoAC as compared to never smokers (Table 2). Similar findings were observed in the smoking intensity analyses by daily cigarette consumption in both current and former smokers (Table S2).

Compared to current smokers, burdens of subclinical atherosclerosis in all 4 circulations linearly decreased among former smokers with length of time since cessation (all P for trend <0.05; Table 3). These reductions became significant at ≥10.4 cessation years for CAC >0 and for AoAC >0 and ≥100, and at ≥24.4 cessation years for carotid plaque, CAC ≥100 and ≥400, ABI, and AoAC ≥1000. After further adjustment for daily cigarette consumption in addition to the adjusted model, the associations of time since cessation with atherosclerotic burdens in all 4 circulations were somewhat attenuated, but still statistically significant except for that with CIMT (Table S3). Additionally, by 24.4 years for CIMT, carotid plaque, and AoAC, by 10.4 years for ABI, and even by less than 10.4 years for CAC, ORs or ratios of expected counts were not statistically different for former smokers compared to never smokers (Table S4).

Discussion

In this community-based, cross-sectional study of Japanese men without apparent CVD, of whom about 30% were current and 50% former smokers, significant associations were observed between current smoking and subclinical atherosclerosis in all 4 vascular beds, including the coronary, carotid, aortic, and peripheral arteries, and between former smoking and that in the carotid and aortic arteries. These associations, particularly with carotid, aortic, and peripheral atherosclerosis, also became stronger with increasing cumulative exposure and intensity of smoking in both former and current smokers compared to never smokers. Even the lowest amount of pack-years or cigarettes smoked daily among current smokers was associated with CAC, particularly CAC ≥100 or ≥400, and AoAC. Furthermore, longer time since smoking cessation was associated with lower degrees of atherosclerosis in all 4 circulations in former smokers, with some differences among the anatomically distinct sites.

Our results support past demonstrations of the adverse effect of smoking in multiple vascular beds, including on CAC. McEvoy et al showed that, among participants aged 45-84 years with a smoking rate of 14% and a cessation rate of 38% in the Multi-Ethnic Study of Atherosclerosis (MESA), current and former smoking were associated with subclinical atherosclerosis at 3 anatomically distinct sites, including CAC, CIMT, and ABI. 5 A significant relationship of current or former smoking to CAC has also been previously reported from other cohorts. 4,30-32
Similarly, CIMT 33-36 and AoAC 18 findings associated with smoking have been confirmed in previous studies. Our findings on CAC, CIMT, AoAC, and ABI related to smoking in a Japanese population, in addition to the past evidence in Western populations, indicate that both current and former smoking are strongly associated with subclinical atherosclerosis in multiple vascular beds.

The present study revealed significant dose-response relationships between cumulative exposure and intensity of smoking and carotid, aortic, and peripheral atherosclerosis in both current and former smokers. MESA examined these relationships, resulting in almost null associations of measures of subclinical atherosclerosis with pack-years of smoking, particularly in current smokers. 5 This inconsistency between studies may be explained by the fact that, in MESA, the relative impact of cumulative smoking exposure on atherosclerosis was not evaluated compared to never smokers. 5 Another reason for the discrepancy between studies may be differences in the study sample demographics, such as smoking rate, pack-years exposure, and atherosclerotic distribution. For example, the cumulative pack-years exposure among current smokers in the present study seems to be higher than that in MESA. 5 The higher pack-years exposure in the present study could contribute to increased risks of atherosclerosis among current smokers, particularly in the highest group of pack-years. In the present study, significant dose-response relationships were also not confirmed in the associations with CAC in particular, although almost all ORs for CAC among current smokers were statistically significant even in the lowest groups of cumulative exposure and intensity, suggesting that cumulative exposure and intensity of smoking may not be important, but that current smoking behavior itself may be harmful for coronary atherosclerosis. To our knowledge, only 2 studies, including the present one, 5 have investigated the dose-response relationship of cumulative pack-years exposure with atherosclerosis, including CAC, whereas, according to another MESA report, 37 cumulative pack-years among current smokers was an important determinant of CVD. Therefore, further research is necessary to confirm these associations in other populations and to identify the mechanisms underlying these inconsistencies among studies.

The present study is one of the first to comprehensively reveal the association between smoking cessation and subclinical atherosclerosis across all 4 vascular beds. The associations of smoking with CAC, CIMT, and ABI decreased with time since quitting in former smokers in MESA. 5 A similar result for smoking cessation and CAC was also observed in a German population-based study. 4 Given that the present study is cross-sectional, our findings of lower odds for subclinical atherosclerosis with longer time since quitting do not show that these risks decrease over time in former smokers, but rather that the accumulation of these markers slows down after smoking cessation. Thus, a shorter interval from cessation and a higher pack-year total could be associated with increased atherosclerotic burdens in former smokers. Additionally, the present analyses, which were performed on time since cessation, do not address how long after quitting smoking cardiovascular health returns to the state of never smokers.
Importantly, however, we extend past knowledge by demonstrating linear reductions in atherosclerotic burdens in all 4 vascular beds with length of time since smoking cessation, albeit with some differences among vascular sites and with slight attenuation after further adjusting for the amount of smoking.

Our study has several limitations. The study design was cross-sectional; therefore, causal and longitudinal relationships were not addressed. However, cumulative pack-years exposure and time since cessation themselves provide information from the past to the present; thus, these time frames would support causality. Also, smoking parameters were based on self-report, rather than on an objective marker such as cotinine, which could lead to underestimation of the true associations. Third, although we carefully controlled for the major known confounders, our findings may, in part, be explained by differences in unknown confounders. Finally, because only Japanese men were included in the analyses, our results are restricted to men of a single ethnic group. However, population homogeneity reduces possible confounding from cultural and environmental variation.

In conclusion, in a community-based sample of Japanese men without clinical CVD, the present study demonstrated strong associations with subclinical atherosclerosis in multiple vascular beds among both current and former smokers, with dose-dependent relationships with cumulative exposure or intensity. These harmful associations also attenuated with time since smoking cessation. Subclinical atherosclerosis in the carotid, coronary, aortic, and peripheral arteries is a strong predictor of CVD. 38-41 Therefore, these new findings in Japanese men, with high rates of smoking and smoking cessation, support evidence for the negative impact of smoking and the benefit for CVD prevention of smoking cessation as early as possible, and provide important implications for tobacco regulatory science on cardiovascular health worldwide.

[Table 3 and Table S4 notes: cessation interval groups were categorized according to tertiles of years since smoking cessation, with current smokers (Table 3) or never smokers (Table S4) as the reference category. Values are ORs and ratios of expected counts with 95% confidence intervals, from logistic regression (dichotomous outcomes) or negative binomial regression (carotid plaque), adjusted for the covariates listed in the Statistical Analysis section plus daily cigarette consumption (for current smokers, daily amount of smoking; for former smokers, daily amount of prior smoking); CT type was further included when CAC and AoAC were analyzed.]
Does Probe-Tube Verification of Real-Ear Hearing Aid Amplification Characteristics Improve Outcomes in Adults? A Systematic Review and Meta-Analysis

This systematic review, the first on this topic, aimed to investigate if probe-tube verification of real-ear hearing aid amplification characteristics improves outcomes in adults. The review was preregistered in the Prospective Register of Systematic Reviews and performed in accordance with the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses. After assessing more than 1,420 records from seven databases, six experimental studies (published between 2012 and 2019) met the inclusion criteria; five were included in the meta-analyses. The primary outcome of interest (hearing-specific, health-related quality of life) was not reported in any study. There were moderate and statistically significant positive effects of probe-tube real-ear measurement (REM), compared with the manufacturer's initial fit, on speech intelligibility in quiet settings (standardized mean difference [SMD]: 0.59) and user's final preference (proportion difference: 52.2%). There were small but statistically significant positive effects of REM on self-reported listening abilities (SMD: 0.22) and speech intelligibility in noise (SMD: 0.15). The quality of evidence for these outcomes ranged from high to very low. The findings show that REMs improve outcomes statistically, but this is based on a small number of studies and a limited number of participants. It is currently unclear if the benefits are of material importance because minimum clinically important differences have not been established for most of the outcomes. Ultimately, there needs to be a cost-effectiveness analysis to show that statistically significant benefits, which exceed the minimum clinically important difference, are worth the cost involved.

The primary intervention for permanent hearing loss is acoustic hearing aids (Kochkin, 2009). These devices are designed to restore audibility of low-level sounds, maximize intelligibility of conversational-level speech, and maintain comfort for loud sounds (Dillon, 2012; Mueller, 2005; Ricketts et al., 2019). Hearing aids are effective at improving hearing-related quality of life in adults with mild and moderate hearing loss (Ferguson et al., 2017). The amplification characteristics for each hearing aid user are specified according to prescription formulae (e.g., National Acoustic Laboratories Non-Linear 2 and Desired Sensation Level Version 5; Keidser et al., 2011; Scollie et al., 2005). Hearing aid fitting software can approximate the prescription characteristics, sometimes known as initial-fit settings. Alternatively, real-ear measurement (REM) involves placing a probe-tube microphone into the ear canal and is used to verify that the real-ear output of the hearing aid matches the prescription target. Numerous studies have shown that initial-fit settings can significantly deviate from prescription targets (Aarts & Caffee, 2005; Aazh & Moore, 2007; Munro et al., 2016). These studies have also shown that REMs improve the match to the prescription targets. REMs have been endorsed by hearing professional societies (e.g., the American Academy of Audiology Task Force Committee, 2006; the British Society of Audiology, 2018). Nevertheless, it remains unclear if the improved match to prescription targets results in better patient outcomes. Determining the effectiveness of REMs is important to decision-makers and stakeholders.
Using REMs requires additional equipment, space, and consumables. Also, the typical UK National Health Service prescription and fitting appointment takes 60 min (British Academy of Audiology, 2016), seven of which are required for REMs (Folkeard et al., 2018); this time could otherwise be used for counselling. The findings have implications for the emerging category of over-the-counter (sometimes called direct-to-consumer) hearing aids, for which the use of REM is not easily possible. If REMs do not result in a better patient outcome, a potential obstacle to over-the-counter and self-fitting hearing aids is overcome. The objective of this review was to systematically evaluate the evidence on whether the use of REMs to match the hearing aid's amplification characteristics to a validated prescription target improves outcomes in adult hearing aid users.

Methods

The protocol for this systematic review was preregistered with the International Prospective Register of Systematic Reviews (PROSPERO; CRD42020166074) and published in BMJ Open (Almufarrij et al., 2020). The systematic review's method was reported in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Shamseer et al., 2015).

Eligibility Criteria

The inclusion and exclusion criteria for studies were structured in accordance with the participants, interventions, comparators, outcomes, and study designs (PICOS) elements.

Participants. Adults (≥18 years old) with any specified degree of sensorineural or mixed hearing loss. Studies that reported only a qualitative description of age and threshold of hearing were also included. No studies were identified, and thus excluded, because of conductive or fluctuating hearing loss.

Interventions. Conventional acoustic hearing aids, intended to be fitted by a qualified hearing professional, were programmed to a prescription target using a REM system. Assistive listening devices, hearables, personal sound amplification products, and direct-to-consumer hearing devices were excluded. Implantable devices (e.g., cochlear implants), bone conduction hearing aids, and contralateral routing of sounds hearing aids were also excluded.

Comparators. Hearing aids were programmed to the manufacturers' approximation of a response appropriate to the wearer's hearing loss, without verification with REMs (i.e., the initial-fit approach). Just as occurs in practice, the prescription that was being approximated could be a validated, published prescription (as typically used for REM fittings), a manufacturer's intentional variation of a published prescription, or a manufacturer's proprietary prescription. In this review, the term initial fit will be used to refer to the comparator.

Outcomes. The primary outcome of interest was hearing-specific, health-related quality of life (e.g., Hearing Handicap Inventory for the Elderly; Ventry & Weinstein, 1982). Secondary outcomes of interest were self-reported listening ability (e.g., abbreviated profile of hearing aid benefit [APHAB]; Cox & Alexander, 1995), composite self-report measures (e.g., International Outcomes Inventory for Hearing Aids; Cox & Alexander, 2002), speech recognition in quiet or noisy settings, generic health-related quality of life, hours of hearing aid use per day, sound quality, preference, number of required follow-up care sessions (i.e., for further fine-tuning), and adverse events (e.g., noise-induced hearing loss).

Study Designs. Randomized and non-randomized controlled trials were included.
Case reports, conference abstracts, book chapters, dissertations, theses, reviews, and clinical guidelines were excluded. However, two gray literature papers (Amlani et al., 2017; Leavitt & Flexer, 2012) were already known to the authors and are briefly considered in the discussion section.

Information Sources

Studies were identified using a systematic search strategy of the following databases: Cochrane Library, Embase (via OVID), Emcare (via OVID), MEDLINE (via OVID), PsycINFO (via EBSCOhost), PubMed, and Web of Science. No search restrictions were applied in terms of the publication's language, status, or year. The reference lists of the included publications were manually scanned to identify further studies. Using Google Scholar's "cited by" feature, publications that have cited any of the included studies were screened to identify additional relevant articles. All searches were performed on January 20, 2020.

Search Strategy

The search protocol and methods were developed by a medical information specialist from Systematic Review Solutions Limited. The search terms were based on experts' opinion, free text, and controlled terms from Medical Subject Headings (MeSH), the Excerpta Medica Tree (EMTREE), and Cumulative Index of Nursing and Allied Health Literature (CINAHL) headings. The search strategies for all databases are reported in Supplementary Material 1.

Data Management

Search results, including titles, authors' details, publication year, publication journal, and abstracts, were extracted to EndNote X9 reference management software. The same software was used to remove any duplicates prior to the initial screening. Next, one author (I. A.) exported the titles and abstracts of all identified articles into an Excel spreadsheet so that they could be easily screened against the eligibility criteria. The reason for any article's exclusion was documented. Each article was assigned a unique number that was linked to the full details of the article.

Selection Process

The titles and abstracts of all identified studies were screened independently by two authors (I. A. and K. J. M.) to determine eligibility for inclusion. A more detailed inspection was used when there was a discrepancy between the two investigators; this included assessing the full article. At this screening stage, discrepancies occurred in 1.4% of cases (resolved by discussion). The full text was retained and inspected by I. A. and K. J. M. for all articles that matched the inclusion criteria. There was complete agreement between the two full-text inspectors. Following PRISMA recommendations (Moher et al., 2009), a flow diagram was used to present the study selection process.

Data Collection Process and Data Items

Data from the eligible studies were extracted by I. A. and verified by K. J. M. to check for consistency. There was complete agreement between the two data extractors. The data were extracted into a predesigned data extraction form adapted from the Cochrane handbook (Higgins & Green, 2008). The extracted data comprised authors (year), methods, participants, intervention, and outcomes. Data presented in graphical form were extracted using an online extraction tool (WebPlotDigitizer; https://automeris.io/WebPlotDigitizer) when necessary.

Risk of Bias in Individual Studies

The assessment of the risk of bias was conducted independently by all three authors. Disagreements, which occurred in 12% of cases, were resolved using a majority decision.
Given the limited number of randomized controlled trials in the field of audiology, it was anticipated that most of the extracted studies would be non-randomized controlled trials; therefore, the Downs and Black (1998) checklist was used because it is easy to administer, has well-established validity and reliability, and can be used to assess the methodological quality of both randomized and non-randomized studies. Because knowledge of the minimum clinically important differences in hearing aid outcomes is lacking, scoring for the final item (number 27) was modified based on whether or not a power calculation was performed. That is, one point was awarded if a power calculation was conducted and zero points if it was not. Consequently, the maximum score was 28 (instead of the original scoring of 32). Articles scoring 26-28, 20-25, 15-19, and <15 were regarded as having excellent, good, fair, and poor quality, respectively (Hooper et al., 2008).

Data Analysis

A meta-analysis was conducted for each outcome using Review Manager 5.3. As some of the studies used different continuous outcomes, the standardized mean difference (SMD; the mean difference between conditions divided by the pooled standard deviation [SD]) was computed along with its 95% confidence interval (CI). The formulae used are reported in Supplementary Material 2. For studies that used more than one measure to assess the same outcome (e.g., speech intelligibility tests at different input levels), the findings were averaged and pooled in the meta-analyses. If the statistical heterogeneity across studies was identified as low, fixed-effect meta-analyses were computed; otherwise, a random-effects meta-analysis was calculated. For each meta-analysis, the estimated effect size was calculated using generic inverse-variance weighting. The effect estimate was reported along with its 95% CI. Forest plots were used to present these results. Asymmetrical distribution of continuous outcomes (i.e., skewed data) was assessed by subtracting the lowest possible value from the mean and then dividing the result by the SD; a ratio below 2 suggests, and a ratio below 1 indicates, a skewed distribution (Deeks et al., 2019). Skewed data were nonlinearly transformed (using an arcsine transformation) to better approximate a normal distribution. All statistical tests were performed at the .05 alpha level.

Subgroup Analysis

Plausible sources of heterogeneity were explored using unplanned subgroup analyses of studies that used the same or different prescription formulae for the intervention and control conditions.

Assessment of Reporting Bias

Publication bias is well known in science in general, and in medicine and health care in particular (Kyzas et al., 2007; Turner et al., 2008; Tzoulaki et al., 2013). Although the authors intended to check for publication bias using a funnel plot of the precision (standard error) as a function of intervention effect estimates, this was not possible because fewer than ten studies reported each outcome (Sterne et al., 2011).

Assessment of Heterogeneity

The percentage of variability between studies' outcomes that is due to heterogeneity rather than random error was computed in Review Manager 5.3 using the I² statistic. Given that an absolute threshold for I² can be misleading, the results were interpreted as low (0-40%), medium (41-60%), or high (61-100%) heterogeneity.

Dealing With Missing Data

The authors were contacted if any of the data were missing.
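The pooling itself was done in Review Manager, but the skewness check, the generic inverse-variance weighting, and the I² statistic described above can be sketched briefly in Python; the study-level SMDs and standard errors below are invented illustrations, not the review's data.

```python
import numpy as np

def skew_ratio(mean: float, lowest_possible: float, sd: float) -> float:
    """(mean - lowest possible value) / SD; <2 suggests, <1 indicates skew."""
    return (mean - lowest_possible) / sd

def pooled_smd_fixed(smds, ses):
    """Fixed-effect pooling by generic inverse-variance weighting,
    returning the pooled SMD, its 95% CI, and the I^2 statistic (%)."""
    smds = np.asarray(smds, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(w * smds) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    q = np.sum(w * (smds - pooled) ** 2)          # Cochran's Q
    dof = len(smds) - 1
    i2 = max(0.0, (q - dof) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

print(skew_ratio(mean=35.0, lowest_possible=0.0, sd=20.0))  # 1.75 -> skew suspected
print(pooled_smd_fixed([0.45, 0.70, 0.55], [0.20, 0.25, 0.15]))
```

When the I² value returned here is high, a random-effects model (which adds a between-study variance term to each weight) would be used instead, matching the decision rule stated above.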
If SDs were missing and could not be obtained from the authors, they were inferred from the available data (e.g., 95% CI or standard errors). The missing correlation coefficient between interventions, which is required to precisely calculate CIs around the effect sizes for within-subject design studies (the design used in all included studies), was estimated from the other included studies (i.e., as the average of their correlation coefficients). Confidence in Cumulative Estimate The quality of evidence for each outcome measure was rated as high, moderate, low, or very low using the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) tool (Atkins et al., 2004). The GRADE tool takes into account five principal domains: study limitations (e.g., blinding and allocation concealment), inconsistency (e.g., no overlap of CIs between studies), indirectness (e.g., difference between the study sample and the population of interest), imprecision (e.g., broad CI), and publication bias (e.g., selective reporting of positive outcomes). Randomized controlled trials without serious shortcomings were, in principle, rated as high-quality evidence (i.e., our confidence level is high enough to conclude that the true effect is close to the estimated effect). Crossover designs, where each participant acted as their own control and the order of trialing intervention and comparator was counterbalanced, were regarded as equal in quality to randomized controlled trials in which each participant was assigned to only one arm. However, the assigned rating was subject to downgrading by either one or two points on the basis of the seriousness of the aforementioned assessment domains. If the review team were in a borderline situation (referred to as a "close-call situation" in GRADE) regarding two quality issues, the quality of evidence was downgraded by one point (Schünemann et al., 2013). A thorough discussion of these factors can be found in Schünemann et al. (2013). The three authors carried out the assessment independently and disagreements were resolved by discussion. The GRADEpro online platform (https://gradepro.org/) was used to develop the summary of findings table (see later). Search and Selection of Studies The selection process is shown in the PRISMA flow diagram (Figure 1). The database search identified 2,243 records, of which 1,420 duplicate records were removed. The titles and abstracts of the remaining 823 articles were screened, and 811 articles were discarded because they did not meet the eligibility criteria. The full texts of the remaining 12 articles were retrieved for further assessment. Of these, eight were removed because the intervention and comparator used were irrelevant to the review question (e.g., Humes et al.'s (2017) article was excluded because the hearing aids for the consumer-decides group were preprogrammed to match the NAL-NL2 targets in a 2-cc coupler). An additional 269 articles were identified by reference and citation checking. The titles and abstracts of these articles were screened against the inclusion and exclusion criteria, and the full texts of two non-English language studies (Persian and Korean) were retrieved. These two studies were translated and included in this review. Therefore, six studies were included in the review for data extraction. The details of the search resources and the number of identified studies are reported in Supplementary Material 3.
Table 1 summarizes the participant characteristics, study design, and outcomes reported in the six studies (Table 1 notes: a, the average deviation from the prescribed target at 0.5, 1, 2, and 4 kHz for an input level of 65 dB SPL; b, the average deviation from the prescribed target at 0.5, 1, 2, and 4 kHz for three different input levels). All studies were interventional crossover designs and were published between 2012 and 2019. The timing of outcome data collection varied from the day of fitting (Chang et al., 2018) to 6 weeks postfitting (e.g., Abrams et al., 2012). Three studies (Denys et al., 2019; Karimi et al., 2016; Valente et al., 2018) were conducted in university clinics, in Belgium, Iran, and the United States, respectively. Of the remaining studies, one was based in a veterans' clinic in the United States (Abrams et al., 2012), one in audiology clinics in the Netherlands (Boymans & Dreschler, 2012), and one in an audiology clinic at a tertiary hospital in Korea (Chang et al., 2018). Participant Characteristics Age. The mean participant age in each of the studies was 50 years or older, with two exceptions: Denys et al. (2019) and Karimi et al. (2016) recruited participants with mean ages of 43 and 42 years, respectively. Sex. The sex distribution was reported in all but one of the studies (Denys et al., 2019). Four studies recruited male and female participants, and the remaining study (Abrams et al., 2012) recruited only male participants. In general, the number of male participants in the studies was about twice the number of female participants. Severity and Types of Hearing Loss. The type of hearing loss was reported in all but one study (Abrams et al., 2012). Of those reporting this, four mainly had participants with sensorineural hearing loss, and one (Boymans & Dreschler, 2012) involved participants with mixed and sensorineural hearing loss. The degree of hearing loss varied considerably across participants. Generally, the mean hearing thresholds were within the mild and moderate range (i.e., 21-70 dB HL). The SDs and ranges of mean hearing thresholds revealed that some studies included participants with mild to severe hearing loss (i.e., 21-95 dB HL), but none included participants with profound loss (i.e., >95 dB HL). Hearing Aid Experience. The level of experience with hearing aids was reported in all studies as either a binary variable (experienced vs. first time) or the length of hearing aid experience in months. Four studies mainly involved experienced users (Abrams et al., 2012; Chang et al., 2018; Denys et al., 2019; Karimi et al., 2016), one study used first-time users (Valente et al., 2018), and one involved a mix of experienced and first-time users (Boymans & Dreschler, 2012). Of the studies that used experienced users, two reported the length of participants' experience in months (Chang et al., 2018; Denys et al., 2019), which ranged from 2 to 222 months. Prescription Formulae. Four of the studies used the same prescription formula (either NAL-NL1 or 2) for both fitting approaches. The remaining two (Boymans & Dreschler, 2012; Valente et al., 2018) used the hearing aid manufacturer's prescription formula for the initial fit and a generic prescription formula (NAL-NL1 or 2) for REM. Of the two studies that used the manufacturer's prescription formula for the initial fit, only one (Boymans & Dreschler, 2012) allowed all participants to adjust the hearing aid gain based on their subjective feedback, which they did using Amplifit II software.
Outcomes The primary outcome (hearing-specific, health-related quality of life) was not reported in any study. Self-reported listening ability and speech intelligibility in quiet and noise were the only reported secondary outcomes. Two additional outcomes (i.e., preference and sound quality) were reported in some of the studies; hence, these two outcomes were added to this review. Self-Reported Listening Ability. Self-reported listening ability was reported in four studies but using different outcome instruments. The APHAB (Cox & Alexander, 1995) was used in two studies (Abrams et al., 2012; Valente et al., 2018). Valente et al. (2018) and Boymans and Dreschler (2012) […] Hearing Aid Fittings; however, this instrument was not included here because its psychometric properties have not been validated. Figure 2 shows the forest plot for self-reported listening ability. (In this and the subsequent forest plots, the size of the square denotes the weight of each study, the whiskers represent the 95% confidence interval around the effect size, and diamonds represent the pooled effect size and its 95% confidence interval; SMD = standardized mean difference; SE = standard error; IV = inverse of variance; CI = confidence interval; REM = real-ear measurements.) Two studies used the same prescription formula in the two conditions, and the improvement was statistically significant in favor of REM fitting (SMD = 0.22, 95% CI [0.05, 0.39]; Abrams et al., 2012; Denys et al., 2019). Two studies used different prescription formulae, and, again, the improvement was statistically significant in favor of the REM fitting (SMD = 0.…; Figure 2). The results were analyzed using the fixed-effect model because the observed heterogeneity was very low. A subgroup analysis (same prescription formula with mainly experienced hearing aid users vs. different prescriptions with mainly first-time users) showed no statistically significant subgroup effect (p = .98). Speech Intelligibility in Quiet. The studies varied considerably in terms of the stimulus used (i.e., words or sentences), the presentation level of the stimulus (i.e., at threshold or suprathreshold levels), the assessment methods (i.e., single or multiple loudspeakers), and the scoring procedure (i.e., phoneme, keywords, or sentences). The parameters used for speech intelligibility testing are reported in Supplementary Material 4. Figure 3 shows the forest plot for speech intelligibility in quiet settings. Two studies used the same prescription formula in the two conditions, and the improvement with REM fitting was not statistically significant (SMD = 0.47, 95% CI [-0.23, 1.18]; Chang et al., 2018; Denys et al., 2019). Two studies used different prescription formulae, and the improvement was statistically significant in favor of the REM fitting (Figure 3). The results were synthesized using random-effect meta-analysis because the observed heterogeneity was high. A subgroup analysis of studies using the same or different prescription formulae in the intervention and control conditions showed no statistically significant subgroup effect (p = .58). Speech Intelligibility in Noise. Figure 4 shows the forest plot for speech intelligibility in noise. Two studies used the same prescription formula in the two conditions, and the improvement with REM fitting was not statistically significant (SMD = 0.14, 95% CI [-0.…, 0.30]; Figure 4).
The results were synthesized using fixed-effect meta-analysis because the observed heterogeneity was low. A subgroup analysis of studies using the same or different prescription formulae in the intervention and control conditions showed no statistically significant subgroup effect (p = .84). Sound Quality. Figure 5 shows the forest plot for sound quality. One study used the same prescription formula for both conditions, and the improvement was statistically significant in favor of REM fitting (SMD = 0.88, 95% CI [0.43, 1.33]; Chang et al., 2018). A similar pattern was found for the other study, which used different prescription formulae for the two conditions (SMD = 0.21, 95% CI [0.03, 0.39]; Boymans & Dreschler, 2012). Despite the mean differences for each study being significantly different from zero, the pooled effect size was not significantly different from zero (SMD = 0.51, p = .12, 95% CI [-0.14, 1.16]; I² = 86%, p = .007; Figure 5). This is due to the large difference in effect size between the two studies. The results were synthesized using random-effect meta-analysis because the observed heterogeneity was high. A subgroup analysis of studies using the same or different prescription formulae in the intervention and control conditions showed a statistically significant subgroup effect (p = .007). Preference. Figure 6 shows the forest plot for users' final preferences. One study used the same prescription formula for both conditions, and the proportion of those who preferred REM fitting was not significantly higher than those who preferred initial fit (proportion difference = 36%, 95% CI [-2.6%, 75%]; Abrams et al., 2012). The effect for the two studies that used different prescription formulae was statistically significant (proportion difference = 54%, 95% CI [38%, 71%]; Boymans & Dreschler, 2012; Valente et al., 2018). Collectively, the three studies (119 participants) show that the proportion of those who preferred REM fitting was significantly higher than those who preferred initial fit (proportion difference = 52.2%, p < .00001, 95% CI [37%, 67%]; I² = 0%, p = .41; Figure 6). The results were synthesized using fixed-effect meta-analysis because the observed heterogeneity was low.
A subgroup analysis of studies using the same or different prescription formulae in the intervention and control conditions showed no statistically significant subgroup effect (p = .39). The robustness of the pooled preference estimates was cross-checked using arcsine-transformed scores, resulting in essentially the same outcome. Table 2 shows the scores on the Downs and Black checklist for each study. The maximum possible quality score was 28. The scores range from 16 (Karimi et al., 2016) to 23 (Abrams et al., 2012), indicating that the quality of the studies is within the range of fair to good. In general, studies had high scores for quality of reporting and internal validity. The high internal validity scores can be partially attributed to the fact that all of the studies used a within-subject crossover design. Furthermore, four studies (Abrams et al., 2012; Boymans & Dreschler, 2012; Denys et al., 2019; Valente et al., 2018) blinded participants to the intervention they received, and, with the exception of Karimi et al. (2016), the order in which the two conditions were trialed was counterbalanced across participants. However, only one study (Valente et al., 2018) attempted to blind the assessors, and none reported the prior amplification characteristics used by the experienced users. External validity was relatively low due to uncertainty over whether the participants were representative of the target population; for example, the participants of Abrams et al. (2012) were limited to male veterans. Similarly, the majority of studies exhibited a low score in the power domain, because only two (Abrams et al., 2012; Valente et al., 2018) used a power calculation to determine the sample size. The GRADE tool was independently used by each member of the review team to assess the quality of each individual outcome. The quality of evidence for each outcome is shown in Table 3. No rating was possible for the primary outcome of hearing-specific, health-related quality of life, as none of the studies reported outcomes in this category. Quality of Evidence The GRADE working group recommended that review authors should downgrade the quality of evidence for all non-randomized controlled trials by two points (i.e., from high to low quality). However, this rule was not applied because all the studies used a crossover design, which the review team regarded as the best design to answer the review question. However, we did downgrade the quality of evidence in some other cases. For example, the GRADE score for self-reported listening ability was downgraded by one point, to moderate quality evidence, due to the combination of two close-call situations with respect to indirectness (i.e., some data were obtained after a short follow-up period and involved only male veterans) and imprecision (i.e., small sample sizes). Discussion This systematic review aimed to identify and assess the current evidence on whether or not the use of REM fitting improves outcomes for adults. Six studies met the eligibility criteria and compared REM fitting with the initial fit.
None of the studies reported hearing-specific, health-related quality of life, which the review team regarded as the primary outcome. Most of the studies examined self-reported listening ability, speech intelligibility in quiet and noise, and preference. Other outcomes of interest (i.e., adverse events and generic quality of life) were not assessed in any of the studies. In two studies, outcomes were measured at the fitting session, indicating that the researchers did not allow participants to acclimatize to the hearing aid settings. The maximum follow-up duration was 6 weeks (e.g., Abrams et al., 2012), and there were no long-term outcomes. Most of the studies assessed the outcomes with experienced hearing aid users, and none of them detailed the amplification characteristics with which the participants were already familiar. Changing the amplification characteristics from what was familiar could impact short-term outcomes (Scollie et al., 2010; Walravens et al., 2020). Self-Reported Listening Ability Four studies included self-reported listening ability, and all showed an advantage for REM fitting, but this was not always statistically significant. The results of the meta-analysis revealed that the overall effect of REM fitting on self-reported listening ability was small (circa 4% benefit on the APHAB) but statistically significant compared with the initial fit. Changing the model of the meta-analysis from fixed- to random-effect would not alter the pooled effect size or the 95% CI for this outcome and the subsequent outcomes because I² was either zero or negative (truncated to 0) in all of these outcomes. The reported advantage of REM fitting in studies that used different prescription formulae for the intervention and control conditions may not be solely attributable to the approach itself because of the difference in the prescription used for the two conditions. However, the pooled effect size for the two studies that kept the prescription constant was the same as that for the two studies that used different prescriptions for the two conditions. The lack of difference could alternatively be attributed to a combination of two factors: one that could increase the effect size (e.g., two different prescriptions instead of one) and one that could decrease it (e.g., first-time instead of experienced hearing aid users). The quality of evidence, as measured with GRADE, was moderate due to the combination of two close-call situations with respect to indirectness and imprecision. Speech Intelligibility in Quiet Speech intelligibility in quiet settings was assessed in four studies. All but one of the studies found a statistically significant advantage of REM fitting over initial fit. The only exception was Denys et al. (2019), who found a non-significant advantage. The results of the meta-analysis showed that REM fitting significantly improves speech intelligibility in quiet settings (with a moderate effect size), at least for a presentation level close to the users' hearing threshold levels (the level used in the majority of the included studies). These findings are somewhat expected, given that the initial fit approach typically falls short of the prescription target for soft and conversational level speech (Munro et al., 2016).
Table 3 (summary of findings). Columns: outcome; difference between initial and REM fitting (95% CI); certainty of the evidence (GRADE); comments.
- Hearing-specific, health-related quality of life: not reported in any study.
- Self-reported listening ability. Assessed with: APHAB (range 1 to 99), SSQ (range 0 to 10), or SSQ12 (range 0 to 10); follow-up: 2 weeks to 6 weeks; no. of participants: 129 (4 crossover studies). Difference: SMD 0.20 higher (0.08 higher to 0.32 higher). A higher score indicates better self-reported listening ability with REM fitting, which is equivalent to about a 4-point advantage on the APHAB and a 0.5-point advantage on the SSQ.
- Speech intelligibility in quiet settings. Assessed with: SRT (range -10 to 120) or SDS (range 0 to 100); follow-up: 0 days to 6 weeks; no. […]
- […] A higher percentage indicates more preference.
Note. REM = real-ear measurement; APHAB = abbreviated profile of hearing aid benefit; SSQ = Speech, Spatial and Qualities of Hearing Scale; SSQ12 = a short form of the Speech, Spatial and Qualities of Hearing Scale; SRT = speech recognition threshold; SDS = speech discrimination score; SRTn = speech recognition threshold in noise; HINT = hearing in noise test; K-HINT = Korean version of the hearing in noise test; SNR = signal-to-noise ratio; CI = confidence interval; GRADE = Grading of Recommendations, Assessment, Development and Evaluations; SMD = standardized mean difference; HL = hearing level. GRADE working group grades of evidence. High certainty: we are very confident that the true effect lies close to that of the estimate of the effect. Moderate certainty: we are moderately confident in the effect estimate; the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. Low certainty: our confidence in the effect estimate is limited; the true effect may be substantially different from the estimate of the effect. Very low certainty: we have very little confidence in the effect estimate; the true effect is likely to be substantially different from the estimate of effect. a We considered downgrading the initial quality of evidence to low quality because all the identified studies were nonrandomized controlled trials, but we did not because all the studies used a randomized crossover design, which the review team regarded as the best design to answer the review question. b We considered downgrading the quality of evidence due to serious risk of bias (e.g., lack of patient, caregiver, or outcome assessor blinding), but we did not do so as the effect size and its 95% confidence interval (CI) were consistent across studies. c One point was deducted due to the combination of two close-call situations with respect to indirectness (i.e., some data were obtained after a short follow-up period and/or involved only male veterans) and imprecision (i.e., small sample sizes). d We considered downgrading the quality of evidence due to serious inconsistency (high and significant statistical heterogeneity), but we did not do so as the effect sizes and their 95% CIs were in the same direction. e Two points were deducted due to serious indirectness (i.e., some data were obtained in the same fitting session) and the combination of two close-call situations with respect to risk of bias (i.e., a plausible carryover effect) and imprecision (i.e., small sample sizes).
f One point was deducted due to serious indirectness because the correlation coefficient (r) between interventions, which is required to precisely calculate the CI around the effect sizes for within-subject design studies, was estimated for both of the studies. The impact of estimating the r value was analyzed, and we found that the pooled CI for sound quality would change from including to excluding zero. The absolute benefit was about a 2.5 dB HL advantage on speech recognition thresholds, which may be just noticeable to patients (Caswell-Midwinter & Whitmer, 2019a). The quality of evidence was low due to serious indirectness and the combination of two close-call situations with respect to risk of bias and imprecision. Speech Intelligibility in Noise Four studies measured speech intelligibility in noise, and all showed a trend of a better outcome with REM fitting. Although none of the studies individually showed an effect size significantly different from zero, the pooled effect indicated that listening in noise was significantly better with the REM fit than with the first fit. This occurred because of the consistency of the direction and magnitude of the effect size across all four studies. The absolute benefit was typically about a 0.5 dB change in signal-to-noise ratio (SNR), which may be neither a noticeable nor a meaningful advantage to patients (Caswell-Midwinter & Whitmer, 2019b; McShefferty et al., 2015, 2016). The evidence was judged to be of low quality due to serious indirectness and the combination of two close-call situations with respect to risk of bias and imprecision. Sound Quality Two studies compared the sound quality between the two fitting approaches. In both studies, there was a significant advantage for REM fitting over initial fit (circa 0.17 points on a 5-point Likert scale). However, the overall effect was not significant due to the considerable statistical heterogeneity between the two effect sizes. The overall quality of evidence was downgraded by three points to very low due to very serious indirectness and the combination of two close-call situations with respect to risk of bias and imprecision. Preference Three studies collected the participants' preference at the end of their experiments. The REM fitting to initial fit preference ratio indicates that at least twice as many hearing aid users prefer their hearing aids to be fitted using REM fitting compared with the initial fit. Although participants in these studies were asked which response they preferred, none were asked why they had this preference. The evidence was judged to be of high quality due to the lack of serious limitations. Sensitivity Analysis Although the formula used to calculate the SMDs is considered to be the most appropriate for crossover design studies (J. P. Higgins et al., 2019), Hedges' g correction may provide better estimates (Lakens, 2013). The extent to which this correction would affect the pooled effect sizes was examined using unplanned sensitivity analyses. That is, the effect size for all outcomes was calculated with and without Hedges' correction. Both methods produced similar results; therefore, we reported only the uncorrected values.
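For reference, the small-sample correction used in the sensitivity analysis above is a single multiplicative factor applied to the uncorrected SMD. A minimal sketch follows; it assumes the degrees of freedom come from the number of paired observations in a crossover study, since the exact df convention used by the authors is not stated.

```python
def hedges_g(smd, df):
    """Apply Hedges' small-sample correction J = 1 - 3/(4*df - 1)
    to an uncorrected standardized mean difference (Cohen's d-type SMD)."""
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * smd

# e.g., an SMD of 0.20 from a crossover study with 30 pairs (df = 29)
# shrinks only slightly: hedges_g(0.20, 29) ~= 0.195
```

The correction factor is close to 1 for all but very small studies, which is consistent with the authors' observation that corrected and uncorrected values gave similar pooled results.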
Quality of Evidence The rating assigned to each outcome was based on the answer to an important practical question confronting clinicians: "does the use of REM to match the hearing aid's amplification characteristics to a validated prescription target improve outcomes in adult hearing aid users relative to initial fit (irrespective of the prescription used for initial fit)?" For this question, the quality of evidence ranged from high to low quality. However, if the question had been "does the use of REM to match the hearing aid's amplification characteristics to a validated prescription target improve outcomes in adult hearing aid users relative to initial fit (using the same prescription method for both conditions)?" the quality would have been lower (from moderate to very low quality), as many of the studies did not use the same prescription for the initial fit and REM conditions. The review team considered the former question to be more relevant because, in practice, a variety of prescriptions are used for the initial fit condition, which was reflected in the studies. Future studies should aim to (a) improve the overall quality of the studies, (b) analyze first-time users separately from experienced users so that, if needed, a separate conclusion can be obtained for each subgroup, (c) allow for further adjustment to the amplification characteristics, (d) estimate the importance to participants of any benefit found, and (e) determine the reasons that participants report for any benefit they experience. Clinical Implication Although the assessment of the quality of the evidence varied across outcomes, the direction of benefit consistently favored REMs. A moderate, statistically significant effect was found for speech intelligibility in quiet settings and users' final preference. A small but statistically significant positive effect was reported for self-reported listening ability and speech perception in noise. These findings support many hearing professional guidelines, which recommend the routine use of REMs to match the hearing aid's amplification characteristics to a validated prescription target. However, although a statistically significant difference indicates that the observed effect would be unlikely under the null hypothesis, it does not speak to the value of the benefit relative to the cost of providing it, or to the clinical significance of the findings. The minimum clinically important difference has not been established for most of the outcomes reported in these studies (Barker et al., 2016). Therefore, not only should we be cautious in terms of the estimated effect sizes, which are generally small (e.g., 0.5 dB SNR), but the magnitude of meaningful benefit of using REMs, as perceived by hearing aid wearers, has yet to be determined. In addition, publication bias may exist because, in general, studies showing small, or null, effects are less likely to be submitted (or accepted) for publication (Kyzas et al., 2007; Turner et al., 2008; Tzoulaki et al., 2013). As a result, relying on published studies may result in an overestimate of the true effect size. We do not know if this is the case for REM studies, but caution is advised in case the importance of REM has been inflated. Limitations of the Systematic Review A potential limitation of this review is that it was restricted to studies that compared the fitting approaches using conventional hearing aids. This restriction eliminates all studies that used other types of amplification products (e.g., direct-to-consumer hearing devices).
However, the majority of these devices are incapable of matching the generic prescription targets even after the use of REMs (Almufarrij et al., 2019; Chan & McPherson, 2015). Another potential limitation is that gray literature studies were not included in this review because there is no agreed method to systematically identify such studies. Including these studies might reduce the effect of publication bias because null findings are less likely to be published in peer-reviewed journals. At the time of publication, the review team were aware of two studies (published in a trade magazine) that measured speech intelligibility in noisy settings for the two fitting approaches (Amlani et al., 2017; Leavitt & Flexer, 2012). Their characteristics, main findings, and quality appraisals are reported in Supplementary Material 5. In both of these studies, there was a statistically significant advantage for REM fitting over initial fit, which is consistent with the findings of this review. The review findings may be limited to adults with mild to severe hearing loss, as none of the review studies included children or adults with profound hearing loss. Deviation From the Published Protocol Relevant outcome measures (i.e., sound quality and preference) that were not listed in our protocol were reported in some studies; therefore, they were included as additional secondary outcomes in this review. None of the prespecified subgroup, publication bias, and sensitivity analyses were performed, due to missing and/or limited data. Unplanned subgroup analyses were performed for all outcomes, mainly due to differences in methods (i.e., using either the same or different prescription formulae for the intervention and control conditions) and population (i.e., first-time and experienced hearing aid users) across studies. Unplanned sensitivity analyses (with and without Hedges' g correction) were also performed for all outcomes due to the small sample sizes. Conclusions The review, the first on this topic, identified a small number of studies with limited numbers of participants. The quality of evidence for the different outcomes ranged from high to very low but favored REM fitting for all outcomes. The findings are consistent with recommendations in hearing professional guidelines that REMs should be used to match the hearing aid's amplification characteristics to a validated prescription target. However, further studies are needed to investigate whether the benefits of REMs are clinically relevant, because minimum clinically important differences have yet to be established for most outcome measures. Ultimately, there needs to be a cost-effectiveness analysis to show that statistically significant benefits, which exceed the minimum clinically important difference, are worth the cost involved.
Magnetocaloric effect and Magnetothermopower in the room temperature ferromagnet Pr0.6Sr0.4MnO3 We have investigated magnetization (M), the magnetocaloric effect (MCE), and magnetothermopower (MTEP) in polycrystalline Pr0.6Sr0.4MnO3, which shows a second-order paramagnetic to ferromagnetic transition near room temperature (T_C = 305 K). However, field-cooled M(T) within the long-range ferromagnetic state shows an abrupt decrease at T_S = 86 K for H < 3 T. The low-temperature transition is first order in nature, as suggested by the hysteresis in M(T) and the exothermic/endothermic peaks in differential thermal analysis for cooling and warming cycles. The anomaly at T_S is attributed to a structural transition from the orthorhombic to the monoclinic phase. The magnetic entropy change is negative at T_C but changes to positive at T_S. The thermopower (Q) is negative from 350 K to 20 K, shows a rapid decrease at T_C, and exhibits a small cusp around T_S in zero field. The MTEP reaches a maximum value of 25% for ΔH = 3 T around T_C, which is much higher than the 15% dc magnetoresistance for the same field change. A linear relation between MTEP and magnetoresistance, and between ΔS_m and ΔQ, is found near T_C. Further, ac magnetotransport in low dc magnetic fields (H ≤ 1 kOe), a critical analysis of the paramagnetic to ferromagnetic transition, and the scaling behavior of the magnetic entropy change versus a reduced temperature under different magnetic fields are also reported. I. Introduction Perovskite manganites having the general formula R1-xAxMnO3 (R is a trivalent rare earth cation and A is a divalent alkaline earth cation) have been extensively investigated during the last two decades due to the colossal magnetoresistance and electroresistance effects exhibited by them, and the rich physics behind their novel phase diagrams. In recent years, much attention has been paid to another exciting property of manganites known as the magnetocaloric effect (MCE).1,2 The MCE refers to changes in adiabatic temperature (ΔT_ad) or isothermal magnetic entropy (ΔS_m) of a magnetic material upon magnetization and demagnetization. The MCE is attractive because it is the working principle of the emerging technology called magnetic refrigeration, which is considered to be energy efficient and environmentally friendly. […] alloy is opposite in sign to the ΔS_m, and in Gd5Si2Ge2 lattice entropy accounts for more than 50% of the total MCE.8 The electronic part of the entropy change is generally assumed to be small. Interestingly, both first-order and second-order phase transitions occur in the Pr1-xSrxMnO3 series as a function of temperature or composition (x),9,10 and hence it is an interesting series for investigating the influence of a structural transition on ΔS_m. While the compounds with 0.2 ≤ x ≤ 0.45 show a second-order paramagnetic (PM) to ferromagnetic (FM) transition, the half-doped compound x = 0.5 shows a second-order PM to FM transition at T_C = 260 K followed by a first-order FM to antiferromagnetic (AFM) transition at a lower temperature (T_N = 125 K). This FM to AFM transition is also accompanied by a tetragonal to monoclinic phase transition, while there is no structural symmetry change across the PM to FM transition.11 Bingham et al.12,13 […] In contrast to the above two compounds, x = 0.4 is a room temperature FM and it shows an orthorhombic to monoclinic structural transition around T_S = 90 K, much below the FM transition.14
It is of our interest to investigate the MCE around the Curie temperature, which happens to be around room temperature (T_C = 305 K) in Pr0.6Sr0.4MnO3, and also how the MCE is affected by the low-temperature structural transition. We also investigate the temperature and magnetic field dependences of the thermoelectric power (TEP) in this compound. Although the temperature dependence of the TEP in zero field was reported for 0.48 ≤ x ≤ 0.6 in the Pr1-xSrxMnO3 series,15 neither the TEP in zero field nor the effect of magnetic field on the TEP has been reported for x = 0.4 so far. In addition, we also report frequency-dependent electrical transport in small dc magnetic fields (H = 0 to 1 kOe). Temperature and field dependences of four-probe ac electrical transport in metallic or low-resistivity manganites have been seldom reported compared to dielectric studies in insulating manganites.16 […] The thermoelectric power (TEP) was measured from 365 K down to 10 K using an automated homemade setup that is interfaced to the PPMS. The PPMS provides a platform to vary temperature and magnetic field. For the TEP measurement, a rectangular sample was mounted between two copper blocks and two chip resistors were used as heat sources. In this method, at a stabilized base temperature of the cryostat, a small temperature gradient (ΔT = 1 K) is generated across the sample length, and the thermoelectric voltage is recorded using copper leads. The temperature difference was measured using a Chromel-Constantan differential thermocouple after steady state was reached. The spurious and offset voltages of the measuring circuit were eliminated by reversing the temperature gradient and averaging the recorded voltages. The apparatus was tested for accuracy by measuring Q on a thin piece of pure lead with respect to copper.19 The applied magnetic field was perpendicular to the direction of the dc current and the length of the sample in both magnetoresistance and magnetothermopower measurements. Four-probe ac electrical impedance (Z = R + iX) as a function of frequency (f = 100 kHz-10 MHz), temperature, and magnetic field was measured using an Agilent 4294A impedance analyzer and the PPMS. […] It suggests that the step-like decrease observed while cooling is not due to an AFM transition. To characterize the low-temperature anomaly, we carried out differential thermal analysis (DTA), which is shown in the inset (a) of Fig. 1. The DTA technique makes use of two Pt-100 resistance thermometers connected in differential mode. The sample is placed on one of the thermometers while the other one is used as a reference. The temperature difference between these two Pt thermometers is read as a function of magnetic field when the base temperature changes.20 The DTA in our sample shows an exothermic peak at T = 89 K while cooling and an endothermic peak at T = 100 K while warming, which confirms the first-order nature of the low-temperature transition. We attribute the anomaly at T_S to an orthorhombic to monoclinic structural phase transition upon cooling, as suggested by a neutron diffraction study on a similar composition by C. Ritter et al.21 Ritter et al. also found that the structural transition is not complete even at 1.6 K. Their refinement of neutron diffraction data gave 88% monoclinic (I2/a) and 12% orthorhombic (Pnma) phases at 1.6 K. C. Boujleben et al.22 determined the coexistence of 73% monoclinic and 27% orthorhombic phases in their sample from a neutron diffraction study.
(a) Magnetization and the critical behavior. The main panel of Fig. […] To understand the order of the magnetic phase transition and the critical behavior using the scaling hypothesis, we have taken M(H) isotherms in 2 K steps from 294 K to 320 K (see Fig. 3(a)). The slope of the M² vs H/M curves (Arrott plots) can determine the order of the phase transition: a positive slope indicates a second-order transition, while a negative slope corresponds to a first-order transition.23 The positive slope of the M² vs H/M curves above and below T_C confirms that the high-temperature PM-FM transition is second order in nature. According to the scaling hypothesis, the critical behavior of a magnetic system showing a second-order magnetic phase transition near the Curie temperature can be characterized by a set of interrelated critical exponents.24 The critical exponents associated with the spontaneous magnetization (M_s), the inverse susceptibility (χ⁻¹), and the magnetization isotherm at T_C were calculated by fitting the experimental isotherms using the following scaling relations for second-order phase transitions:25,26 M_s(T) = M_0(-ε)^β for ε < 0, χ_0⁻¹(T) = (h_0/M_0)ε^γ for ε > 0, and M = D H^{1/δ} at ε = 0, where ε = (T - T_C)/T_C and M_0, h_0/M_0, and D are critical amplitudes. […] (b) Magnetocaloric effect. Oesterreicher and Parker,28 using the mean-field approach, derived a proportional relation between the field-dependent change in the magnetic entropy (ΔS_m) and the applied field (H). At T = T_C, the magnetic entropy change was found to vary as -ΔS_m ≈ 1.07 qR (gμ_B J H / k_B T_C)^{2/3}, where q is the number of magnetic ions per mole, R is the gas constant, and J is the total angular momentum. Above T_C, ΔS_m is quadratic in magnetic field (ΔS_m ∝ -p_e² H²/(T - θ)²), where p_e is the effective magnetic moment in the paramagnetic state and θ is the paramagnetic Curie temperature obtained from the inverse susceptibility. Recently, Franco et al.29 showed that ΔS_m in a second-order PM-FM phase transition can be expressed as ΔS_m ∝ H^n for T < T_C if a proper temperature scaling is introduced. They showed that the plots of the magnetic entropy change normalized to its maximum value (ΔS_m/ΔS_Max) for different magnetic fields versus a reduced temperature θ collapse onto a single curve. […] Fig. 6 shows ΔS_m/ΔS_Max versus the reduced temperature θ for different magnetic fields. It is found that the data for all the magnetic fields collapse into a single master curve. (c) Direct (dc) and alternating current (ac) electrical transport. Now let us discuss the dc and ac electrical transport. Fig. 7 shows the dc electrical resistivity as a function of temperature under H = 0 and 3 T magnetic fields. While lowering temperature, ρ(T) shows insulating (∂ρ/∂T < 0) behavior in the paramagnetic state. It shows a small kink around T = 305 K (≈ T_C) but goes through a maximum around T_p = 213 K. The departure of T_p from T_C is most likely due to the presence of highly resistive grain boundaries.30 As the temperature decreases, the magnetization of the ferromagnetic grains increases and the resistivity falls below T_p once the percolation threshold for metallic conduction is reached. The structural transition at T_S hardly affects the resistivity except for introducing a slight slope change. When a magnetic field of H = 3 T is applied, the kink at T_C is suppressed, T_p shifts by 9 K, and the magnitude of the resistivity below T_C decreases. The dc magnetoresistance, MR = [ρ(0) - ρ(3 T)]/ρ(0), is shown on the right scale of Fig. 7. The dc MR reaches ≈ -15% at T_C and increases up to -39% at 10 K.
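The entropy-change curves discussed above are conventionally obtained from M(H) isotherms through the Maxwell relation ΔS_m(T, H) = ∫₀^H (∂M/∂T)_{H'} dH'. Below is a minimal numerical sketch of that integration and of the normalization used for the master curve; the array names and grids are hypothetical, and this is not the authors' actual analysis code.

```python
import numpy as np

def entropy_change(T, H, M):
    """-DeltaS_m(T) for a field change 0 -> max(H), from magnetization
    isotherms M[i, j] measured at temperatures T[i] and fields H[j]:
    central differences for (dM/dT)_H, trapezoidal integration over H."""
    dM_dT = np.gradient(M, T, axis=0)     # (dM/dT) at each field point
    dS = np.trapz(dM_dT, H, axis=1)       # integrate over the field sweep
    return -dS                            # positive peak expected near T_C

def normalized_curve(minus_dS):
    """Normalize -DeltaS_m to its maximum, as used in the master-curve
    (DeltaS_m/DeltaS_Max vs reduced temperature) construction."""
    return minus_dS / minus_dS.max()
```

For a second-order transition, curves computed this way for different maximum fields should collapse onto one master curve after the reduced-temperature rescaling, which is the behavior reported in Fig. 6.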
It is known that electrical conduction in the paramagnetic insulating state of manganites, in general, is dominated by adiabatic hopping of small polarons, which obeys the relation ρ(T) = ρ_0 T exp(E_p/k_B T), where E_p is the activation energy for the electrical transport and k_B is the Boltzmann constant. E_p = W_H + E_s, where E_s is the activation energy for thermopower and W_H is the binding energy of the polaron.31 The plot of ln(ρ/T) versus 1/T (see the inset) above T_C shows a linear dependence, which suggests that thermally activated polaronic conduction dominates the charge transport in the paramagnetic state of our sample. The estimated activation energy is E_p = 114 meV in zero magnetic field. Now let us consider the ac electrical transport. Fig. 8 shows the temperature dependence of the ac resistance (R) and reactance (X) of Pr0.6Sr0.4MnO3 for three frequencies, f = 1, 5, and 10 MHz, measured in zero and low dc magnetic fields (H = 300, 500, and 700 Oe, and 1 kOe). For f = 1 and 5 MHz, a small step-like increase in R occurs around T_C = 304 K. The feature is more pronounced for f = 10 MHz. We can also see a step-like decrease at T_S = 86 K, which closely correlates with the magnetization data. The applied magnetic field decreases the magnitude of the anomaly at T_C and at T_S. Under a magnetic field of H = 1 kOe, the step-like increase at T_C is completely suppressed, leading to an ac magnetoresistance of -4%. This value is an order of magnitude smaller than the ac magnetoresistance near T_C found in La0.7Sr0.3MnO3.32 In contrast to the ac resistance, X(T) in zero field shows clear anomalies at both T_C and T_S even for f = 1 MHz. The anomalies are clearly suppressed with increasing dc magnetic field. There is a qualitative change in the behavior of the impedance Z(T, H) = R + iX, where R is the ac resistance and X is the reactance. The reactance X = 2πfL is due to the self-inductance (L) of the sample, which is related to the ac transverse permeability of the sample (L = Gμ_t, where G is a geometrical factor and μ_t is the transverse permeability). When the skin depth is larger than the thickness of the sample, the current flow is uniform in the sample. The circular ac magnetic field created by the ac current interacts with the magnetization of the sample. At the onset of the ferromagnetic transition, μ_t in zero external magnetic field increases rapidly, which causes L, and hence X(T), to increase abruptly at T_C. The decrease of X(T) around T_S = 86 K indicates a decrease of μ_t due to the structural phase transition. In the presence of an axially applied dc magnetic field, μ_t is expected to decrease since the small ac magnetic field becomes inefficient at rotating the magnetization away from the direction of the dc magnetic field. Hence, the maximum decrease of μ_t occurs at temperatures just below T_C, and X(T) just below T_C decreases with increasing H. As the frequency increases, the skin depth decreases, which causes the ac resistance (R) to increase since the available cross-sectional area for the current flow decreases. Since δ ∝ 1/√(fμ), R increases abruptly at T = T_C and also shows a feature at T = T_S when f = 10 MHz. The decrease in skin depth also affects the behavior of the reactance. The transverse permeability decreases, and hence δ increases, with increasing strength of the dc magnetic field, which results in suppression of the ac resistance near T_C. The magnitude of the ac magnetoresistance depends on the resistivity and transverse permeability of the sample.
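To give a rough sense of the skin-depth argument above: δ = √(ρ/(π f μ₀ μ_r)), so a jump in the transverse permeability at T_C shrinks δ and raises the measured ac resistance. The small sketch below uses illustrative, assumed numbers, not values taken from the paper.

```python
import math

MU0 = 4.0e-7 * math.pi          # vacuum permeability (H/m)

def skin_depth(rho, f, mu_r):
    """Classical skin depth delta = sqrt(rho / (pi * f * mu0 * mu_r)),
    with resistivity rho in Ohm*m and frequency f in Hz."""
    return math.sqrt(rho / (math.pi * f * MU0 * mu_r))

# Illustrative: rho = 10 mOhm*cm = 1e-4 Ohm*m at f = 10 MHz
print(skin_depth(1e-4, 10e6, 1))      # mu_r ~ 1 (paramagnetic): ~1.6 mm
print(skin_depth(1e-4, 10e6, 100))    # mu_r ~ 100 (near T_C):   ~0.16 mm
```

With a millimeter-scale sample, the first case corresponds to nearly uniform current flow, while the second confines the current to a thin surface layer, raising R, which is why the ac anomaly at T_C grows with frequency and permeability.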
Due to the high value of the transverse permeability (μ ≈ 10⁴) and low resistivity (ρ = 100-400 μΩ cm), amorphous and nanocrystalline alloys show a much larger ac magnetoresistance (70-90% for H = 10-100 Oe) than manganites, but it saturates in kOe fields.33 (d) Thermoelectric power in zero and nonzero magnetic field. Fig. 9 shows the temperature dependence of the thermoelectric power (Q) in H = 0 and 3 T. The Q is negative (≈ -20 μV/K) at 360 K and it smoothly decreases in magnitude (Q becomes less negative) as T_C is approached from the high-temperature side. A rapid change in Q occurs around T_C, and then Q decreases smoothly towards zero in the ferromagnetic state as T approaches 90 K. Around T_S = 86 K, Q shows a dip and then increases slowly. It is to be noted that the rapid change in Q occurs at T_C rather than at the temperature where the dc resistivity shows a maximum (T_p), which clearly indicates that the thermopower probes the intrinsic property of the grains and is insensitive to grain boundaries. In an electrical transport measurement, the voltage drop across the highly resistive grain boundaries is sampled along with the voltage drop in the low-resistance ferromagnetic grains. Because no current flows, the thermopower of individual grains is additive and independent of the resistivity of intergranular connections, and hence Q probes the intrinsic phase transition within grains. An applied external magnetic field of H = 3 T eliminates the anomaly around T_C but hardly affects Q much below 250 K, unlike the influence of H on ρ(T). When the electrical transport in the paramagnetic state (T > T_C) is dominated by thermally activated hopping of polarons, Q is expected to obey the relation Q = (k_B/e)[E_s/(k_B T) + α′], where k_B is the Boltzmann constant, e is the electronic charge, and α′ is a sample-dependent constant related to the kinetic energy of the polarons.34 If α′ < 1, the transport follows the small-polaron hopping model, and for α′ > 2 it follows large-polaron hopping.35 We show Q versus 1/T and the fit based on the above relation in the inset (b) of Fig. 7. The calculated value of α′ in zero field is 0.515, which is less than 1. The calculated activation energy for Q is E_s = 26 meV, which is much lower than the activation energy (E_ρ = 117 meV) obtained from the electrical transport. Such a large difference between the activation energies for electrical resistivity and thermopower is considered a hallmark of polaronic transport above T_C. The Q is negative in the temperature range from 30 to 360 K, which suggests that the charge carriers are predominantly electron-like at the Fermi level. A negative value of Q over a wide temperature range was also found for x = 0.48-0.55 in the Pr1-xSrxMnO3 series.15 The compositions with x < 0.5 are supposed to be hole doped; however, it is not uncommon to find negative Q for x < 0.5. For example, Q in the La1-xCaxMnO3 series is negative at room temperature even for x = 0.3, or Q shows a change of sign as a function of temperature.36,37,38 The sign of Q in manganites is affected by the contribution from the spin-disorder term (Q_s = -20 μV/K) in the paramagnetic state and by the carrier entropy due to the presence of correlated polarons or charge-ordered nanoclusters, and these make the analysis difficult unless the measurement is extended to very high temperature.29,30,39 We are more interested in the change of Q with the magnetic field rather than the value and sign of Q in zero field alone. The magnetothermopower, defined as MTEP = [Q(H) - Q(0)]/Q(0), is shown on the right scale.
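The activation parameters quoted above follow from the fact that Q is linear in 1/T in the polaronic regime: with Q = (k_B/e)[E_s/(k_B T) + α′], the slope of Q versus 1/T is E_s/e and the intercept is (k_B/e)α′. A minimal fitting sketch follows, assuming Q is supplied in V/K; for the negative Q measured here one would fit -Q or carry the sign convention explicitly (an assumption of this sketch, since the authors' convention is not stated). This is not the authors' fitting code.

```python
import numpy as np

KB_OVER_E = 8.617e-5   # k_B/e in V/K (about 86.17 uV/K)

def fit_polaron_tep(T, Q):
    """Linear fit of Q (V/K) against 1/T in the paramagnetic regime:
    Q = (E_s/e)*(1/T) + (k_B/e)*alpha'. Returns (E_s in eV, alpha')."""
    slope, intercept = np.polyfit(1.0 / T, Q, 1)
    E_s_eV = slope                  # slope = E_s/e, numerically E_s in eV
    alpha = intercept / KB_OVER_E   # dimensionless kinetic-energy term
    return E_s_eV, alpha
```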
The MTEP is zero at 360 K, increases in magnitude with lowering temperature, goes through a peak around T_C, and then decreases to zero above 250 K. The value of the MTEP reaches 25% at T_C for ΔH = 3 T. The observed value of the MTEP is comparable to the value of 38% for ΔH = 5 T reported at T_C = 225 K in a La0.67Ca0.33MnO3 film by Boxing Chen et al.40 The available literature on the MTEP in manganites is very limited compared to that on magnetoresistance.41,42,43,44 We have also measured the field dependences of Q and ρ at selected temperatures (T = 290-320 K with ΔT = 5 K). We plot the MTEP and MR as a function of H in Fig. 10(a) and (b), respectively. The isotherms at T = 305 and 310 K showed a large change in Q and ρ as the field was swept from H = 0 to 3 T. No hysteresis was observed while reducing the field to zero. In order to seek a correlation between the MR and the MTEP, we plot [-ΔQ/Q(0)] against [-Δρ/ρ(0)] for the above temperatures in Fig. 10. […] We have also attempted such a scaling in our compound. In Fig. 11, […]. A similar analysis in other compounds is highly desirable for comparison as well as to understand a common mechanism relating magnetothermopower, magnetoresistance, and magnetic entropy change. IV. Summary In summary, we have investigated magnetization, direct and alternating current magnetoresistance, the magnetocaloric effect, and magnetothermopower in a Pr0.6Sr0.4MnO3 sample having a ferromagnetic transition just above room temperature (T_C = 305 K). Magnetization upon cooling showed a sharp decrease at T_S = 86 K within the long-range spin-ordered state, and this feature was accompanied by an exothermic peak in differential thermal analysis. This low-temperature anomaly in the magnetization was attributed to an orthorhombic to monoclinic structural transition upon cooling. The structural transition leads to an inverse magnetocaloric effect at T_S, whereas the normal magnetocaloric effect peaks around T_C. The coexistence of a large ΔS_m value of 3.416 J/kg K and a refrigeration capacity of 103.34 J/kg for ΔH = 3 T with a magnetothermopower of 25% for ΔH = 3 T makes this compound interesting for applications in room-temperature magnetic refrigeration, magnetically tunable thermoelectric power generators, and heat pumps. In addition, we have found a close correlation between magnetoresistance, magnetothermopower, and magnetic entropy change in the same compound. We have also shown that the ac electrical transport tracks the behavior of the ac susceptibility and provides a simple means of investigating the interplay between charge transport and magnetism simultaneously.
Tendon Extracellular Matrix Remodeling and Defective Cell Polarization in the Presence of Collagen VI Mutations Mutations in collagen VI genes cause two major clinical myopathies, Bethlem myopathy (BM) and Ullrich congenital muscular dystrophy (UCMD), and the rarer myosclerosis myopathy. In addition to congenital muscle weakness, patients affected by collagen VI-related myopathies show axial and proximal joint contractures and distal joint hypermobility, which suggest the involvement of tendon function. To gain further insight into the role of collagen VI in human tendon structure and function, we performed ultrastructural, biochemical, and RT-PCR analyses on tendon biopsies and on cell cultures derived from two patients affected with BM and UCMD. In vitro studies revealed striking alterations in the collagen VI network, associated with disruption of the collagen VI-NG2 (neural/glial antigen 2) axis and defects in cell polarization and migration. The organization of extracellular matrix (ECM) components, as regards collagens I and XII, was also affected, along with an increase in the active form of metalloproteinase 2 (MMP2). In agreement with the in vitro alterations, tendon biopsies from collagen VI-related myopathy patients displayed striking changes in collagen fibril morphology and cell death. These data point to a critical role of collagen VI in tendon matrix organization and cell behavior. The remodeling of the tendon matrix may contribute to the muscle dysfunction observed in BM and UCMD patients. Introduction Ullrich congenital muscular dystrophy (UCMD), Bethlem myopathy (BM), and myosclerosis myopathy (MM) are diseases caused by mutations in each of the three genes (COL6A1, COL6A2, and COL6A3) encoding the extracellular matrix protein collagen VI. Prevalence is estimated at 0.77:100,000 for Bethlem myopathy and 0.13:100,000 for Ullrich CMD [1]. These disorders have a variable clinical severity and a characteristic combination of joint hyperlaxity and muscle contractures. Ullrich congenital muscular dystrophy is a severe disorder characterized by congenital muscle weakness, proximal contractures, and distal laxity; BM is a mild/moderate form characterized by axial and proximal muscle weakness with prominent distal contractures; MM is characterized by slender muscles with a firm 'woody' consistence and the restriction of movement in many joints [1]. The existence of such a peculiar contracture pattern associated with each clinical form suggests a role for the collagen VI-based matrix in tendons, which was underestimated until now. Tendon architecture comprises few cells, named tenocytes, interspersed within an extracellular matrix (ECM) mainly composed of collagen fibrils organized in longitudinal arrays, whose function is to transmit forces to joint elements without undergoing deformation or damage [2]. Fibrils are made of collagen type I, which represents the main component, and of collagen types III, V, VI, XII, and XIV. Proteoglycans and glycoproteins are also present between fibrils [3]. Tenocytes produce the ECM in tendons and are located along fibrils, lined by a morphological structure known as the pericellular matrix (PCM), a sort of specialized ECM [4,5].
Collagen type V and type VI, fibrillin, and decorin contained in the PCM are reported as regulatory factors for the assembly of collagen fibrils; in addition, integrins and proteins related to cell adhesion, matrix turnover, and signal transduction are also identified in the PCM [5], which is therefore supposed to drive tendon repair when necessary. Localization of collagen VI has been determined both at the interfibrillar tendon ECM, appearing as a linking element between coarser collagen type I fibrils, and in the PCM, where its fibrils anchor the tenocyte's membrane and develop extensive networks based on multiple interactions [4,6]. Fibrils appear at the TEM as beaded filaments with a regular 100 nm pace, which are built extracellularly by end-to-end joining of tetramers [6], in turn composed (intracellularly) of two equal sets of three distinct chains forming heterotrimers. There are a variety of these elemental chains; those named alpha1, alpha2, and alpha3 are the most widely expressed and make up the most common heterotrimer. In the extracellular space, the organization of collagen VI fibrils may vary from extended interconnections to thicker, parallel arrays of beaded fibrils, on the basis of interactions with other ECM elements and cell receptors [6][7][8][9][10]. More recent research led to the identification of additional collagen VI elemental chains in humans, named alpha5 and alpha6, which are similar to alpha3 but are expressed only at specific sites [11][12][13][14]. In tendons, where the most expressed heterotrimer is alpha1-alpha2-alpha3 [15], alpha5 is found at the myotendinous junction while alpha6 is not expressed.

While we previously demonstrated that collagen VI, through its interaction with the CSPG4/NG2 transmembrane proteoglycan, regulates specific cellular functions, including cell polarization and migration, the impact of mutations in collagen VI genes on human tendon function has been poorly explored, mainly due to the difficulty in obtaining biopsies from patients' tendons [16]. So far, only one study on tendon fibroblasts of a UCMD patient has been reported [16]. In this paper, we confirm the tendon matrix alterations in another UCMD patient with a mutation in the COL6A1 gene and show for the first time the presence of dysfunctional fibrillogenesis and altered cell behavior in tendon fibroblasts of a patient with a BM phenotype. We also demonstrate for the first time that two myopathies, the severe Ullrich and the benign Bethlem, representing opposite ends of the collagen VI-related myopathy spectrum, share similar alterations of the tendon matrix, which also involve, in addition to collagen VI, collagens I and XII. By in vitro studies, we show that alterations in the collagen VI network affect cell polarization and migration.

Patients

We obtained a biopsy of the pedidium tendon from a UCMD patient (UCMD) with a heterozygous mutation in COL6A1 [17], and a biopsy of the piriformis tendon from a BM patient with COL6A2 exon 6 c.802 G>A het (p.Gly268Ser) (P1 in [18]) who received surgical treatment for a femur fracture. Two piriformis and two pedidium tendons were obtained from healthy volunteers who underwent surgical intervention. Sample processing followed previously described procedures [16]. All subjects gave their informed consent before they participated in the study.
The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the Rizzoli Orthopedic Institute (project identification code PG0006743, approval date 5 July 2017).

Tendon Cell Cultures

Tenocytes, the mature cell type of the tendon, have a low proliferative capacity; thus, cultured tendon cells mainly derive from progenitor cells, which display a fibroblast-like phenotype [19]. We use "tendon fibroblasts" to indicate tendon-derived cultures. To obtain tendon fibroblast cultures, tendon fragments were subjected to mechanical dissociation and maintained in Dulbecco's modified Eagle medium (DMEM) containing 1% antibiotics plus 10% fetal bovine serum (FBS) [20]; 0.25 mM L-ascorbic acid was added to the medium to allow collagen VI tetramer secretion [21]. Cells were expanded for two passages and stored in liquid nitrogen. All in vitro experiments were performed on cells at passages three to seven.

Western Blot Analysis

Cultured tendon fibroblasts were harvested by scraping. The media recovered from the different culture conditions were concentrated with Vivaspin sample concentrators (Vivaspin 2 MWCO 10000, GE Healthcare, Amersham, Pittsburgh, PA, USA) according to the manufacturer's operating procedures. Cell lysates and concentrated culture media were resolved by standard SDS-PAGE, electro-blotted onto a nitrocellulose membrane, and incubated with antibodies against the α3(VI), α1(VI), and α5(VI) chains [16]; an NG2 proteoglycan antibody (Millipore) was used as previously reported [16]; tenomodulin and actin (Santa Cruz) were used as loading controls. Primary antibodies were followed by incubation with anti-mouse or anti-rabbit horseradish peroxidase (HRP)-conjugated secondary antibodies. Chemiluminescent detection of proteins was carried out with the enhanced chemiluminescent ECL detection reagent kit (GE Healthcare Amersham, Pittsburgh, PA, USA) according to the supplier's instructions.

Gelatin Zymography

To assess gelatinase activity (MMP2 and MMP9), cells were treated with serum-free medium, and conditioned media were collected and concentrated (Vivaspin 2, 10,000 MWCO, Sartorius, Göttingen, Germany). Gelatinase activity was determined under non-reducing conditions on a 7.5% SDS-polyacrylamide gel containing 2 mg/mL gelatin (Mini-PROTEAN II system; Bio-Rad Laboratories Ltd, Hempstead, UK). Gels were washed in 2.5% Triton X-100 to allow renaturation of MMPs, before they were transferred to a solution containing 50 mM Tris (pH 7.5), 5 mM CaCl2, and 1 mM ZnCl2, followed by incubation at 37 °C for 18 h. After staining with Coomassie brilliant blue R250 (Bio-Rad Laboratories, Hercules, CA, USA), pro-MMP2 and active MMP2 were observed as white lysis bands produced by gelatin degradation.

Transmission Electron Microscopy Studies

Tendon fragments were fixed with 2.5% glutaraldehyde in 0.1 M cacodylate buffer and 1% osmium tetroxide and embedded in Epon812 epoxy resin following standard procedures. Sections were stained with uranyl acetate and lead citrate and observed with a Jeol Jem-1011 transmission electron microscope operated at 100 kV. For rotary shadowing, tendon fibroblasts were grown onto coverslips and, after confluence, were treated for 24 h with 0.25 mM L-ascorbic acid. In vitro immunolabeling was performed with a polyclonal antibody against the α3(VI) chain. Rotary shadowing of immuno-gold labeled samples was performed following reported procedures [25].
Replicas were washed with distilled water, collected on copper grids, and examined with a Jeol Jem-1011 transmission electron microscope operated at 100 kV.

Quantitative RT-PCR

Total RNA was extracted with TRI Reagent solution (Invitrogen) and then treated with TURBO DNase (Invitrogen, Carlsbad, CA, USA). cDNAs were synthesized using the High-Capacity RNA-to-cDNA Kit (Applied Biosystems, Foster City, CA, USA), according to the manufacturer's protocol. Gene expression was determined by qPCR, using Power SYBR Green PCR master mix (Applied Biosystems).

Scratch Wound Healing Assay

Normal, BM, and UCMD tendon fibroblasts were seeded onto coverslips and cultured to confluence in 10% FBS-containing medium for 24 h. A straight scratch simulating a wound was made across the center of the cell monolayer, using a sterile 200-µL pipette tip. After 6 h, cells were fixed with cold methanol and processed for collagen VI and NG2 or Golgin-97 immunofluorescence analysis. For tracking analysis, cells were grown to confluence on tissue culture dishes, and after scratching, phase contrast images were acquired for 20 h at regular intervals of 15 min. Single cell migration was monitored by evaluating the accumulated distance (path length from start to end point) and the Euclidean distance (the shortest distance between start and end point).
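For orientation, the two distance measures used in the tracking analysis can be computed directly from the time-lapse nucleus coordinates. The following Python sketch is illustrative only: the function name and data layout are assumptions, not part of the original analysis pipeline; it assumes positions sampled every 15 min over 20 h, as described above.

```python
import math

def migration_metrics(track, interval_min=15.0):
    """Compute accumulated distance, Euclidean distance, mean velocity,
    and directionality ratio for one cell track.

    track: list of (x, y) nucleus positions, one per frame
           (assumed here to be sampled every 15 min for 20 h).
    """
    # Accumulated distance: sum of step lengths along the trajectory.
    accumulated = sum(
        math.dist(track[i], track[i + 1]) for i in range(len(track) - 1)
    )
    # Euclidean distance: straight line between start and end points.
    euclidean = math.dist(track[0], track[-1])
    # Mean velocity: accumulated distance divided by elapsed time.
    elapsed_h = (len(track) - 1) * interval_min / 60.0
    velocity = accumulated / elapsed_h if elapsed_h > 0 else 0.0
    # Ratio of accumulated to Euclidean distance: values near 1 indicate
    # persistent, directional migration; larger values indicate a more
    # random trajectory.
    ratio = accumulated / euclidean if euclidean > 0 else float("inf")
    return accumulated, euclidean, velocity, ratio
```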
Statistical Analysis

Statistical analysis was performed by a Student's t-test with GraphPad Prism version 8.0.0 for Windows (GraphPad Software, San Diego, CA, USA). The results were considered statistically significant for p-values less than 0.05.

Collagen VI Expression in Tendon Fibroblast Cultures of BM and UCMD Patients

Collagen VI is a component of the PCM of normal tendon fibroblasts [25]. In order to define whether BM and UCMD patient cells are able to organize a collagen VI pericellular matrix, proliferating tendon fibroblasts from patients and controls were grown in the presence of L-ascorbic acid. Secreted collagen VI was studied by immunofluorescence and Western blot analysis. In normal tendon cultures, collagen VI microfilaments distributed along the tendon fibroblast processes, while in BM and UCMD cultures, aggregates of collagen VI appeared to deposit mainly among the cells (Figure 1A, left panels). In long-term patient cultures, collagen VI organization displayed a spot-like appearance with respect to the filamentous arrangement of the control (Figure 1A, right panels).

The organization of collagen VI was further explored by electron microscopy analysis of rotary shadowed replicas of proliferating cells, immunolabeled with an anti-α3(VI) chain specific primary antibody and a 5-nm colloidal gold-conjugated secondary antibody. In control replicas, typical webs of collagen VI appeared well laid out and anchored to the cell surface along the cell processes, a pattern consistent with the proposed function of collagen VI in mediating the attachment of cells to the substrate. In contrast, UCMD and BM cultures displayed tangled webs and short single microfilaments of collagen VI deposited on the substrate. Aspects of collagen VI microfilaments and web association with the cell processes were rare (Figure 1B).

Western blot analysis of cell lysates and conditioned media from BM, UCMD, and control, in the presence of ascorbic acid, showed comparable amounts of collagen VI α1(VI) and α3(VI) chains (Figure S1A). Changes in expression of the α5(VI) chain were detected in the UCMD cell lysate and medium, with a decreased protein level in the cell lysate and an increase in the medium. In BM cells and medium, the α5(VI) chain was comparable with that of the normal control (Figure S1A).

Collagen VI-NG2 Axis is Disrupted in Tendon Fibroblasts from BM and UCMD Patients

Given the critical role of the NG2 proteoglycan in mediating the attachment of collagen VI microfibrils to the cells [25], we studied the expression pattern of NG2 in BM and UCMD tendon cultures. Confocal microscopy with anti-NG2 and anti-collagen VI antibodies showed that in control cells NG2 was present at the cell membrane and clearly co-localized with collagen VI. In BM cultures, NG2 proteoglycan showed a clustered distribution that correlated with the anomalous aggregates of collagen VI. In UCMD tendon cell cultures, NG2 staining was barely detectable, while collagen VI was apparently not associated with the cell surface (Figure 2A). In agreement with a reduced association of collagen VI with NG2 proteoglycan (Figure 2B), the k2 co-localization coefficient was reduced in BM and UCMD cells when compared with normal cells (Figure S1B). A consistent reduction of NG2 was also demonstrated by Western blot analysis in UCMD cells, while in BM cells the protein level was similar to that of normal cells (Figure 2C), as indicated by densitometric analysis (Figure 2D). Interestingly, in UCMD, the CSPG4 mRNA transcript, encoding the NG2 proteoglycan, was comparable with that of the control (Figure 2E), suggesting that the reduced protein amount could be due to a post-transcriptional regulatory mechanism.
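For readers less familiar with pixel-based co-localization measures of the kind referred to above, a Manders-type calculation on two background-subtracted channel images gives the general idea. Note that the exact definition of the k2 coefficient reported by the acquisition software may differ; the following Python sketch is therefore an assumption for illustration, not the authors' exact procedure.

```python
import numpy as np

def manders_type_coefficient(ng2: np.ndarray, col6: np.ndarray,
                             thresh: float = 0.0) -> float:
    """Manders-type coefficient: the fraction of total NG2 signal found
    on pixels where collagen VI intensity is above a threshold.
    Both inputs are single-channel intensity images of the same shape.
    """
    ng2 = ng2.astype(float)
    overlap = ng2[col6 > thresh].sum()  # NG2 intensity on COL6-positive pixels
    total = ng2.sum()
    return overlap / total if total > 0 else 0.0
```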
These data indicate that the organization of endogenous NG2 by tendon cells is affected by the presence of collagen VI mutations, thus potentially having a role in the function of the collagen VI-based pericellular matrix.

Disruption of Collagen VI-NG2 Axis Affects BM and UCMD Cell Polarization During Migration

Given the role of the collagen VI-NG2 axis in stabilizing the lagging end of cells during migration [16], we subjected BM and UCMD tendon fibroblasts to a scratch wound assay, to study cell motility and migration in response to in vitro injury. Confluent tendon fibroblast cultures were scratched and, after six hours, were studied by immunofluorescence analysis. In agreement with previous studies [25], control cells displayed collagen VI at the trailing edge of the moving cells, with typical collagen VI microfilaments anchoring the cell rear to the substrate (Figure 3A). By contrast, BM and UCMD cells showed collagen VI mainly deposited among the cells. Few small aggregates were associated with the cell rear, suggesting that in patient-derived cells the collagen VI-NG2 binding is lost during migration (Figure 3A).

To explore its functional consequences, we studied cell polarization using the scratch wound assay. Tendon fibroblasts facing the wound promptly migrated toward the empty space created by the scratch. As a marker of cell polarization, we used Golgin-97, a protein of the Golgi apparatus, which in migrating cells is oriented toward the leading edge with respect to the cell nucleus. As expected, most of the migrating control cells showed uniform orientation towards the wound edge. Strikingly, UCMD cultures displayed a strong increase of incorrectly polarized cells. Although less marked, the number of incorrectly oriented cells was also significant in the BM culture (Figure 3B,C).

We monitored the migration of single cells after scratch wounding and imaged cells at constant intervals (15 min) for 20 h. The migration speed of cells facing the wound was assessed by evaluating the ratio between the accumulated distance (the sum of the trajectory distances between start and end points) and time. The persistence of directional migration was determined by calculating the ratio of the accumulated distance to the Euclidean distance (the shortest distance between start and end points). Interestingly, both measurements were significantly increased in BM and UCMD patient cells, indicating that, although faster, collagen VI-deficient cells display a random trajectory compared to normal cells (Figure 3D,E). This new experiment adds important information about the mechanism regulated by collagen VI during cell migration. By anchoring the cell rear to the substrate, collagen VI contributes to the stability of cell direction.

Figure 3. (A) Immunofluorescence microscopy of collagen VI (COL6) and NG2 double-labeled cells subjected to scratch wound assay. Collagen VI microfilaments and NG2 co-localize at the rear of control (CTRL) cells. In BM and UCMD migrating cells, collagen VI and NG2 staining appears reduced, with a spot-like pattern.
Nuclear staining, DAPI. Scale bar, 50 µm. (B) Immunofluorescence microscopy of Golgin-97, a marker of the Golgi apparatus, in tendon fibroblasts subjected to the scratch wound assay. In migrating cells of the control (CTRL), the Golgi apparatus was located between the cell's leading edge and the nucleus. In BM and UCMD samples, cells at the wound edge appear incorrectly oriented (arrows indicate the direction of the single cells). Scale bar, 20 µm. (C) The graph indicates the percentage of cells whose Golgi apparatus is not facing the wound. Data represent mean ± SE of three independent experiments. *** p < 0.0001. (D) Tracking of the individual cell movement during the in vitro scratch wound assay. On the left, phase contrast images at T0 of scratched samples. Colored dots identify the position of the nucleus of monitored cells. On the right, graphical representation of the tracks of each cell monitored for 20 h (T20). Scale bar, 0.5 mm. (E) Graphs showing the mean velocity (upper) and the ratio of the accumulated distance (Ad) to the Euclidean distance (Eu) (lower) of control (CTRL), BM, and UCMD patient cells. Data represent mean ± SE of three independent experiments. * p < 0.05, ** p < 0.001.

Collagen VI Alterations Affect the ECM Organization in BM and UCMD Tendon Fibroblasts

To better define the influence of collagen VI mutations on the assembly of the extracellular matrix, we studied the expression of some collagen VI-related ECM proteins potentially relevant for tendon matrix function. Fibronectin is associated with the pericellular matrix of tenocytes in vivo and is expressed by tendon fibroblasts in vitro [16]. In normal tendon cultures, a fine network of intertwined fibronectin fibrils was detected, which co-localized with collagen VI fibrils. Similarly, collagen I and collagen XII, two major components of the tendon matrix, displayed a filamentous arrangement that is partially co-distributed with the collagen VI network (Figure 4A, upper panels). In contrast, in BM and UCMD cultures, fibronectin displayed a parallel arrangement, with fibrils running parallel to the long axis of the cells. In addition, the organization of collagen I and collagen XII was affected, as indicated by the presence of aggregates matching with the anomalous collagen VI deposits (Figure 4A, middle and lower panels).

Furthermore, we investigated the expression and activity of the gelatinase MMP2, which is involved in tendon matrix turnover and remodeling [26]. Western blot analysis showed an increase in expression of the 63 kDa active form of MMP2 in conditioned medium from BM and UCMD patient cultures, while pro-MMP2 levels were unchanged as compared to controls (Figure 4B). In contrast, the expression of MMP2 and pro-MMP2 in cell lysates of the same BM and UCMD cultures was comparable with normal controls. By gelatin zymography, we found increased MMP2 activity in the conditioned medium of UCMD patient cells, while, in the patient cell lysate, it was comparable to that of normal controls. In contrast, BM cell lysate and medium did not show significant changes in gelatinolytic activity compared to control cells (Figure 4C). The level of MMP2 mRNA transcripts in cells derived from patients was similar to that of normal cells (Figure 4D). These data indicate that in the presence of mutated collagen VI chains, MMP2 may undergo proteolytic activation after secretion in the extracellular matrix, though its activity is differently regulated in cells of BM with respect to that of UCMD patients.
UCMD and BM Tendon Biopsies Show Changes Consistent with ECM Remodeling

To gain a better understanding of the tendon matrix in vivo, we performed an ultrastructural analysis on a pedidium tendon biopsy obtained from a UCMD patient and on a fragment of the piriformis tendon obtained from a BM patient, and compared the morphology with biopsies obtained from identical tendons of healthy subjects. Normal piriformis and pedidium tendons showed quite similar morphological features, consisting of scattered tenocytes with long cellular processes and well-packed collagen fibrils oriented parallel to the major axis of the tendon (Figure 5A). The tendon matrix of both normal tendons was mainly constituted by collagen fibrils of different diameters and few elastin-oxytalan fibers (Figure S1D). The analysis of collagen fibril diameter revealed a bimodal distribution, with fibrils ranging between 20 and 170 nm, and a shift toward large fibers in the pedidium (Figure 5C,D).

The analysis of the UCMD and BM tendon biopsies showed tenocytes with reduced cell processes and hypercondensed heterochromatin, features of dying cells (Figure 5B). In addition, cross-sectioned BM and UCMD tendon biopsies displayed alterations of collagen fibril morphology, with irregular profiles. Groups of "ragged" fibrils were detected in BM (Figure 5C), while scattered large "cauliflower-like" fibrils were often detected in the tendon of the UCMD patient (Figure 5D). These alterations were present both in proximity to the tendon fibroblasts as well as dispersed in the extracellular matrix. The diameter analysis of the patients' fibrils revealed minimal changes to the median values when compared with that of the respective healthy tendon (Figure S1D); however, the frequency distribution of patient fibrils was clearly shifted toward smaller diameters (50-100 nm in the BM tendon, and 50-120 nm in the UCMD tendon), with loss of fibrils having a diameter >140 nm (Figure 5D).
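The diameter analysis described above reduces to descriptive statistics over the measured fibril diameters. A minimal Python sketch follows; the binning and the 140 nm cut-off echo the values quoted in the text, while the function name and data layout are illustrative assumptions.

```python
import numpy as np

def fibril_stats(diameters_nm, bin_width=10, large_cutoff=140):
    """Summarize a set of collagen fibril diameters (in nm):
    median, histogram of the frequency distribution, and the
    fraction of 'large' fibrils above the cut-off."""
    d = np.asarray(diameters_nm, dtype=float)
    bins = np.arange(0, d.max() + bin_width, bin_width)
    counts, edges = np.histogram(d, bins=bins)
    return {
        "median_nm": float(np.median(d)),
        "histogram": list(zip(edges[:-1], counts)),
        "fraction_above_cutoff": float((d > large_cutoff).mean()),
    }
```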
Discussion

Patients with collagen VI-related myopathies develop contractures and joint hyperlaxity, which have a significant impact on the patient's quality of life. Identifying the molecular cause(s) of contractures will help in planning therapeutic strategies in collagen VI-related myopathies. Emerging theories point to tendon dysfunction as the major cause for the onset of contractures [16,27,28]. We had the opportunity to obtain tendon tissue fragments from a BM patient and a UCMD patient who received surgical treatment. We found extracellular matrix defects in the tendon biopsies and derived fibroblast cultures, consisting of morphological alterations in the tendon matrix associated with increased active MMP2, and defective cell polarization in vitro.

By immunofluorescence, we found that collagen VI organization was altered in tendon cultures from both BM and UCMD patients. In fact, in patients' cells collagen VI formed large aggregates, scattered in the ECM, and apparently was not associated with the cell surface. Rotary shadowing analysis confirmed the immunofluorescence pattern, revealing that collagen VI aggregates consisted of tangled microfilaments scarcely connected with the cell processes. The presence of aggregates is widely reported in the ECM of skin fibroblasts and skeletal muscle of UCMD patients [29,30]. Similar alterations were also reported in the matrix of a UCMD patient carrying a homozygous mutation in COL6A2 [16]. The formation of protein aggregates is consistent with the effect of UCMD mutations on collagen VI assembly, which may affect both intracellular and extracellular steps. However, the presence of collagen VI aggregates in the BM tendon fibroblast culture was an unexpected finding, since the analysis of a muscle culture from the same patient showed a normal organization of collagen VI (patient BM2 in [29]). On the basis of previous studies performed on skin and skeletal muscle cultures, BM and UCMD mutations were reported to have a different impact on the collagen VI expression pattern; in fact, while UCMD mutations cause collagen VI absence/reduced expression and aggregates, BM mutations induce subtle changes of protein organization, mainly detectable by rotary shadowing and electron microscopy analysis [31,32]. Thus, the severity of collagen VI defects reported here in BM tendon fibroblast cultures may suggest that collagen VI mutations have a differential impact on collagen VI organization depending on the tissue, although we cannot exclude the involvement of additional regulatory mechanisms.

We previously found that the transmembrane proteoglycan NG2, encoded by the CSPG4 gene, plays a regulatory role in collagen VI organization within the pericellular matrix of normal tendon [16]. In normal proliferating tendon fibroblasts, collagen VI showed an early association with the cell membrane in areas expressing NG2 proteoglycan. In UCMD cells, we found a clear reduction of NG2 by Western blot analysis, which correlated with a dramatic reduction of the co-localization rate with collagen VI, as indicated by the k2 coefficient. The distribution of NG2 was also altered in the BM culture, although the protein level was not apparently affected, as indicated by Western blot analysis. RT-PCR analysis of CSPG4 did not reveal any defects at the transcriptional level in either BM or UCMD cells, suggesting that NG2 defects may be related to post-transcriptional events. The expression of NG2 was reported to be altered in the muscle of UCMD patients and in a Col6a1−/− murine model [27]. We recently found that NG2 was also reduced in tendon cultures of a UCMD patient [16]. These data indicate that changes in collagen VI expression also cause a parallel change in NG2 expression. It is possible that membrane-associated collagen VI might protect NG2 from proteolytic degradation.

We previously reported that the binding of collagen VI to NG2 is essential for the orientation of tendon fibroblast migration in vitro. We found that collagen VI and NG2 co-localized at the trailing edge of migrating cells, providing an anchorage to the substrate [16]. In order to define the impact of collagen VI mutations on cell migration, we subjected BM and UCMD cultures to a scratch wound assay in vitro. It is interesting to note that collagen VI was almost absent at the trailing edge of migrating cells, which correlated with a reduced expression of NG2 proteoglycan. In addition, the number of incorrectly oriented cells was markedly increased in UCMD cultures and, to a lesser extent, in BM cells.
Tracking analysis of migrating cells showed that during migration UCMD and BM cells displayed a random trajectory. These data further support our hypothesis that alterations of the collagen VI-NG2 axis affect cell orientation during migration.

A large number of regulatory molecules interact with collagen VI, including the metalloproteinase MMP2 [33]. Interestingly, we found an increase of MMP2 in conditioned medium from both BM and UCMD tendon fibroblasts. RT-PCR analysis of MMP2 mRNA did not show obvious differences with respect to normal cells, pointing to the involvement of post-transcriptional regulatory mechanisms in MMP2 activation. It is interesting to note that the α2 chain of collagen VI modulates the activity of MMP2 by sequestering pro-MMP2 in the extracellular matrix and blocking proteolytic activity [33]. It is conceivable that mutated collagen VI loses this specific function, resulting in the increase of active MMP2. In agreement, a moderate increase in MMP2 was observed in Col6a1−/− mice, a collagen VI null model [27], and in the tendon culture of a patient with a COL6A2 mutation [16]. Although the active MMP2 protein was similarly increased in BM and UCMD conditioned media, the gelatinolytic activity was significantly increased only in the UCMD culture, pointing to a differential regulation of protein activity in UCMD with respect to BM. The regulation of MMP2 activity involves several mechanisms, such as transcription, regulation of mRNA half-life, secretion, intra- or extracellular localization, enzyme activation, and inhibition by specific and nonspecific cellular protease inhibitors [34]; thus, determining the actors responsible for such a differential regulation in BM and UCMD requires further extensive studies.

MMP2 is involved in the initiation and progression of fibril growth and matrix assembly during tendon development [35]. Consistent with the role of MMP2 in collagen fibril organization, we found that the expression of collagen types I and XII was affected in areas of collagen VI accumulation in BM and UCMD tendon fibroblast cultures. Fibronectin, a pericellular matrix component of tendon cells, also displayed changes in its three-dimensional arrangement, similar to what we previously described in collagen VI-deficient fibroblasts [36]. The presence of anomalous aggregates in the matrix may also contribute to the ECM alterations. It was proposed that mutated collagen VI accumulates in the matrix, affecting correct binding with extracellular matrix partners.

We also detected ECM alterations in the tendon biopsies of both BM and UCMD patients. Ultrastructural analysis revealed alterations of fibril morphology and a significant reduction in the number of large fibrils. These data correlate with fibril abnormalities reported in the skin of UCMD patients [37], in the tendons of collagen VI myopathy mouse models [27,28,38], and in the tendon of a UCMD patient [16]. Altogether, our data indicate that COL6 mutations affect both the in vivo and in vitro organization of the tendon matrix, resulting in defects consistent with dysfunctional fibrillogenesis. Fibril alterations were reported in some forms of Ehlers-Danlos syndrome (EDS) with hypermobile phenotype [39] and in an animal model of EDS with joint phenotype [40], suggesting the involvement of common pathophysiological pathways in this group of connective tissue disorders. Fibril abnormalities have also been considered a consequence of decreased loading [41], disuse, and aging-related sarcopenia [42].
Our data, however, point toward a primary tendon dysfunction in BM and UCMD, as indicated by the presence of alterations in tendon-derived cells, demonstrating a causative effect of collagen VI mutations in determining the tendon phenotype.

Contractures are common to several forms of muscular dystrophies and age-related sarcopenia and represent a highly debilitating condition which worsens the motor capability and quality of life of affected people. Joint contractures are currently treated with surgery, with scant success. Understanding the biologic mechanism of contractures is an area that has been explored very little and by few research teams. The occurrence of contractures in diseases with different genetic origins points to a common pathogenic feature. However, the causative mechanisms of contractures are still unclear. Contractures, hitherto considered only a consequence of the fibro-adipose replacement of skeletal muscle, with loss of elasticity and elongation, greatly limit motor function in patients with myopathies. Our report on tendon alterations in collagen VI-related myopathies sheds light on collagen VI-related defects involved in contracture development. Future therapeutic strategies could take advantage of restoring the proper relationship between ECM components and improving tendon cell migration.
2020-02-13T09:24:49.960Z
2020-02-01T00:00:00.000
{ "year": 2020, "sha1": "59814bde279702be10b58b370c65ce6a556af3f2", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/9/2/409/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e0ef8ca9a33116e03b08eeb3c6a9bdbdfeabdea", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
4659081
pes2o/s2orc
v3-fos-license
Agent-oriented Architecture for Ubiquitous Computing in Smart Hyperspace

The agent-oriented approach is increasingly showing its magic power in a diversity of fields, specifically ubiquitous computing and smart environments. Meanwhile, the next creative issue is considered to be interconnecting and integrating isolated smart spaces in the real world into a higher-level space known as a hyperspace. In this paper, an agent-oriented architecture, which involves the techniques of mobile agents, middleware, and embedded artificial intelligence, is proposed. Detailed implementations describe our efforts on the design of terminal devices, user interfaces, agents, and an AI computing module to combine two single smart spaces, UbiLab and UbiDorm, into a practical smart hyperspace.

Introduction

The progressive advances in computer systems, together with the simultaneous improvement in Wireless Sensor Networks (WSNs) [1] and other related fields, contribute to the ubiquitous computing era, when our physical space will be filled with different kinds of smart devices, which possess the capability of computing and communication. At the same time, pervasive networks, which are formed by ubiquitous devices interconnected with each other through wireless communications, the Internet, and other media, supply services and useful information to us instantly and constantly. Thus, Mark Weiser's ubiquitous computing, "services can be provided to users anytime and anywhere with any devices" [2], which was first propounded in 1991, is approaching. And we name this kind of physical space a smart environment, where the knowledge about its users and the surroundings could all be acquired and applied based on ubiquitous computing, in order to adapt to users and meet the goals of convenience and efficiency.

However, the tendency that a variety of smart devices such as laptops, handheld PDAs, tiny sensor nodes, etc. are becoming common facilities in people's daily life not only offers us the fundamental platforms, but also raises issues on how to take advantage of these heterogeneous devices to realize ubiquitous computing. Besides, in order to deploy such a smart environment, context, which is defined as "any information that can be used to characterize the situation of entities (i.e., whether a person, place or object) that are considered relevant to the interaction between a user and an application, including the user and applications themselves" [3], is of paramount importance. Meanwhile, with the emergence of a number of context-aware systems and smart environment projects, the increasing natural demand for interconnecting and integrating isolated smart spaces in the real world into a higher-level space known as a hyperspace also raises essential difficulties on how to implement context-aware systems in large-scale intelligent environments and how to expand context-awareness to dynamic, open systems.

As a feasible solution to those problems, the agent-oriented approach provides our research with the key features of autonomy, collaboration, and especially intelligence. Furthermore, a novel architecture with multi-agents to realize a smart hyperspace (including two single physical intelligent environments, UbiLab and UbiDorm) is adopted by us.
The rest of this paper is organized as follows. Section 2 contains relevant research in the fields of ubiquitous computing, smart environments, and agent-based approaches. Section 3 explains the overview of our paradigm's framework and two essential components, namely, the middleware and the AI computing model. Section 4 details the design and implementation of our experimental smart hyperspace. Section 5 discusses the results and the analysis of this work. And finally, the conclusion and future work are given in Section 6.

Ubiquitous Computing

Industry as well as academia has advanced lots of ubiquitous projects over the last two decades. SAFE-RD [4], MARKS [5], and ETS [6] characterize the works carried out by M. Sharmin, S. Ahmed, and S. I. Ahamed in the field of ubiquitous computing, in which security, adaptability, efficiency, middleware, resource discovery, and self-healing are mainly investigated. MIT's Oxygen project [7], Carnegie Mellon University's Aura project [8], and UC Berkeley's Endeavour project [9] also devote their efforts to a number of different aspects in realizing ubiquitous computing (i.e., large-scale computing, QoS, task scheduling, context awareness) in some particular conditions, significantly accelerating the growth of smart spaces.

Smart Environment

So far, in spite of no existing explicit definition of what a smart environment exactly is, massive endeavors towards proposing such prototypes are already ongoing. According to Mark Weiser, a smart environment is "a physical world that is richly and invisibly interwoven with sensors, actuators, displays, and computational elements, embedded seamlessly in the everyday objects of our lives, and connected through a continuous network" [10]. The Aware Home Research Initiative at Georgia Tech, which is viewed as one of the first living laboratories, aimed at multidisciplinary exploration of emerging technologies and services based in the smart home [11]. Another relevant research effort, launched by The University of Texas at Arlington and known as the MavHome (Managing an Adaptive Versatile Home) project, focused on the creation of an intelligent and versatile home environment with state-of-the-art algorithms and protocols used to provide a customized, personal environment to the users of this space [12]. Besides, the increased interest of industrial labs in constructing smart environments is evidenced by Microsoft's Easy Living project [13], IBM's BlueEyes project [14], and the Speakeasy project at Xerox PARC [15], etc.

Agent-Oriented Solution

An agent is a software entity that has some properties of a human such as autonomy, reasoning, learning, and knowledge-level communication, etc. [16].
The agent-oriented approach, which is highly expected to play a vital role in achieving smart spaces at a high level, has already expressed energetic effects on ubiquitous computing and smart environments in numerous applications. Essex's iDorm project targets realizing the vision of ambient intelligence in health care environments by combining the use of unobtrusive sensors and effectors with intelligent embedded agents [17]. A novel type-2 fuzzy-systems-based adaptive architecture for agents embedded in ambient intelligent environments, a hierarchical fuzzy genetic multi-agent architecture for building a learning mechanism, together with another novel life-long learning approach based on intelligent agents, are addressed by Essex's group in [18][19][20], respectively. Other contemporary research also covers the following areas: using a neural-network agent-based approach to recognize different high-level activities [21], a systematic and useful methodology to develop agent-based systems [22], intelligent agents involving case-based reasoning (CBR) and Bayesian networks [23], etc.

Based on these aforementioned technologies and theories, a foundation for our work had been laid. However, dissimilar to the related projects, we introduce the mobile agent to our intelligent hyperspace, in which improved operation efficiency, minimal manual interaction, seamless context exchange between multiple smart spaces, and an optimal control strategy for a large and dynamic environment are all ensured.

System Architecture

Figure 1 shows the configuration of our experimental smart hyperspace, which involves the UbiLab and UbiDorm. We selected a laboratory and a dormitory to build our prototype, because they are the most common places a researcher's daily activities may cover. In our research, both of them are furnished with many various devices, ranging from small ones like RFID and embedded nodes suited to wireless sensor networks (WSNs), to middle ones such as PDAs, mobile phones, and gateways, and to large ones represented by laptops and PCs. These either mobile or immobile devices are distributed over their local physical space, constituting a smart network, which is capable of obtaining context, computing, and providing services to users. These two physically separate spaces connect with each other by telecommunication networks such as GPRS and GSM, the Internet, etc. So they can be regarded as a conceptual smart large environment as a whole.

As depicted in Figure 2, the architecture for the smart hyperspace is a hierarchy of rational agents, which are able to accomplish their specific tasks to meet the overall goal, and a set of concrete functional layers, which cooperate to realize our system. The following details the interpretation of each layer.

a) Device Layer (DL): The Device Layer contains the very basic hardware, including daily appliances (i.e., fans, lights, alarms, etc.), wireless sensor nodes, network hardware such as gateways, and user interface devices involving PCs, PDAs, and mobile phones. This layer takes charge of gathering context directly from the surroundings through sensors installed in tiny wireless nodes or from user interface devices and other sources, controlling appliances according to the upper layer's commands, and exchanging physical bytes throughout heterogeneous networks.
b) Logical Interface Layer (LIL): The main task of this layer, whose crucial part is the middleware, is formatting and extracting the data either from the lower layer, DL, or the higher layer, LCM. It is also responsible for managing some necessary information between the agents and users.

c) Local Context Management Layer (LCM): This layer takes the responsibility of gathering, storing, and generating useful knowledge to make our experimental prototype operate as a smart entity. Besides, its duties also consist of integrating both contextual information and the messages coming from the Space Interconnection Layer, mining data, making proper decisions, and coordinating the tasks allocated to different agents. In particular, a database and a computing module with sufficient artificial intelligence play the most important role in this layer.

d) Space Interconnection Layer (SIL): In order to realize a hyperspace, we lay this layer at the top of our hierarchical framework. The Internet, GSM, and some other widespread networks provide basic platforms for this layer. Due to its existence, it is possible to build up effective and efficient links between two physically separate spaces (or, further, among more than two spaces).

Middleware

Figure 3 shows the architecture of DisWare developed by us [24]. This mobile agent-based middleware, which chiefly contains an interface for instructions, an agent management module, a system management module, and an interface for network communication, mediates between mobile agents and the OS layer (and device layer). It separates applications from the specific design of the system. Consequently, the flexibility and availability of the whole system are greatly improved. There are four major parts composing our DisWare.

a) Instructions Interface (II): This interface provides the instruction set for mobile agents, so that we could realize different types of agents by simply modifying the particular code of the agent.

b) Agent Management (AM): Agent Resource Management, Agent Transfer Control, and Execution Management constitute the Agent Management module. This module views the agent as a certain class, and management of the agent's resources, dispatch and retraction of the agent, management of the execution queue, etc. as functions in this class. Some characteristics such as modularity, encapsulation, etc., which exist in object-oriented programming, are also emphasized here to benefit the design of agents.

c) System Management (SM): This System Management comprises Network Management, Device Management, and Memory Management.

Mobile Agent

Mobile agents are programs that can migrate from host to host in a network, at times and to places of their own choosing. The state of the running program is saved, transported to the new host, and restored, allowing the program to continue where it left off [25]. To meet this requirement, our DisWare agent possesses these three components: static code, context storage, and state management, as demonstrated in Figure 4. Our DisWare agent runs static code which contains some static functions to fulfill particular missions. The code size is usually much smaller than the data size in a mobile agent. Context storage takes charge of storing all kinds of useful context information, including sensor data gathered from the physical environment, the agent ID, the program counter, the operation pointer, and the addresses of different kinds of static code, etc. Recording the agent's state (running states, migration states, etc.) is achieved by state management.
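As a rough illustration of this three-component agent structure (static code, context storage, and state management), the following Python sketch shows how an agent's context and state could be packaged for migration while its code stays static. The class and method names are hypothetical and do not reflect DisWare's actual API, which is implemented in nesC and C#.

```python
import pickle

class MobileAgentSketch:
    """Illustrative mobile agent: static code plus serializable
    context (sensor data, agent ID, program counter) and state."""

    def __init__(self, agent_id):
        self.context = {"agent_id": agent_id, "pc": 0, "sensor_data": []}
        self.state = "created"  # e.g. created / running / migrating

    def step(self, reading):
        # Static code: one unit of the agent's task.
        self.context["sensor_data"].append(reading)
        self.context["pc"] += 1
        self.state = "running"

    def suspend_for_migration(self) -> bytes:
        # State management: freeze the execution state and context so the
        # agent can resume on the destination host where it left off.
        self.state = "migrating"
        return pickle.dumps((self.context, self.state))

    @classmethod
    def resume(cls, blob: bytes):
        # Restore context and continue from the saved program counter.
        context, _ = pickle.loads(blob)
        agent = cls(context["agent_id"])
        agent.context = context
        agent.state = "running"
        return agent
```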
Embedded Artificial Intelligence

Automation was initially targeted as soon as the concepts of pervasive computing and smart spaces first came into our view. It is widely accepted that the development of artificial intelligence (AI) is the key solution to this problem. In our research, we combine the benefits of diverse AI algorithms in our UbiLab and UbiDorm environments. Owing to DisWare, real-time context gained by hardware devices can be easily obtained by agents. Besides, communication between different agents becomes convenient, and other functions like sensor reconfiguration, dynamic reprogramming, and remote task assignment, which are difficult to achieve by some traditional methods, are all available. Based on these advantages, we import a data mining module to extract useful information from historical records, and a fuzzy module [20] to fuzzify the real-time context to reduce the computing complexity. The neural network [21], which has the ability of learning, adapting, predicting, and making suitable decisions, is considered the essential part of the AI module, and a genetic algorithm [19] is used to tune the neural networks. A rule management module is set here to store both the general knowledge and the correct rules. Eventually, a fault tolerance module filters the actions generated by the Decision Module. And then the behaviors which are finally carried out feed back to the History Module. Figure 5 illustrates how these components collaborate with each other to improve the overall performance in our smart hyperspace.

The fuzzy module extracts information by categorizing the real-time context into a set of fuzzy membership functions, so that a simple but effective approach is formed to build models at a certain level of information granularity. Once the agent has extracted the membership functions and the set of rules from the user input data, the fuzzy module has learned how to fuzzify those contexts. A sample rule is as follows:

IF Temperature is X1(It) and Light is X2(Il) and Humidity is X3(Ih) THEN O is Y(CF). (1)

where X1, X2, X3 are conditions of the fuzzy logic membership functions; It, Il, Ih are the sensed data, representing the exact values of temperature, light, and humidity, respectively; Y is the fuzzy output, and CF is the confidence factor attached to the consequent part of this rule.

Our neural network is a multi-input multi-output connectionist feed-forward architecture with two hidden layers. The conditions about the surroundings, the current time, and the state of various kinds of home appliances are all considered as inputs, while the outputs are the related commands taking charge of controlling the corresponding effectors. As soon as the users change the context, the neural network is triggered to enter a new training. Iterations of each training are set at 2000, and the genetic algorithm's experimental setup is as follows: probability of crossover, 90%; probability of mutation, 0.1%; population, 20; generations, 2000. The fitness function is evaluated over the network outputs, where n represents the total number of outputs.
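To make rule (1) concrete, here is a minimal Python sketch of evaluating one such rule with triangular membership functions and a min-conjunction for AND. The membership parameters and the choice of min are assumptions for illustration, since the paper does not specify them.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rule(I_t, I_l, I_h, CF=0.9):
    """Evaluate rule (1): IF Temperature is X1(It) AND Light is X2(Il)
    AND Humidity is X3(Ih) THEN O is Y(CF).
    Membership parameters below are illustrative placeholders."""
    x1 = triangular(I_t, 20.0, 28.0, 36.0)   # 'hot' temperature (deg C)
    x2 = triangular(I_l, 0.0, 200.0, 400.0)  # 'dim' light (lux)
    x3 = triangular(I_h, 10.0, 25.0, 40.0)   # 'dry' humidity (%)
    strength = min(x1, x2, x3)               # AND as min-conjunction
    return strength * CF                     # confidence-weighted output Y
```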
Terminal Device

To implement our experimental smart hyperspace, designing the terminal devices, which have the first contact with users or the surroundings, is expected as the primary step. Figure 6(a) exhibits the UbiCell node with a sensor board installed on it, which was designed by us previously, especially for such applications, to perceive humidity, temperature, and luminance information at one time. Figure 6(b) displays the UbiDot node equipped with pulse, body temperature, and blood oxygen sensors. Similarly, it can acquire all three types of physiological information simultaneously [26]. Figure 6(c) shows the gateway, whose mission is exchanging messages among wireless sensor networks, GSM/GPRS, and the Internet. The wireless multimedia sensor node used to capture sound and images is described in Figure 6(d).

User Interface

In our smart hyperspace, services should be supplied to users conveniently through efficient interfaces on the human interaction devices. Hence, graphical user interfaces (GUIs) are designed to hit this target. Figure 7(a) presents the GUI of the real-time monitor of temperature, humidity, and luminance [27] on a PC. Their values are updated periodically according to the information the agents transmit, so the changes can be identified vividly. Unlike the PC, or even a notebook PC, PDA-based or smart-phone-based solutions no longer suffer from a lack of mobility. Figure 7(b) indicates the mobile phone wireless application interface. Figure 7(c) shows the initial welcome screen with five choices, while one instance of the history query interface is portrayed in Figure 7(d). In our experiment, the GUIs on the PC are implemented using Visual C# in Visual Studio 2005, and the GUIs on the PDA or smart phone are implemented in the Windows CE .NET framework 2.0.

Design of Agent

The programming framework and strategies involved in JACK Intelligent Agents [28] are consulted here in order to carry out the agent-based solution. The advantages of BDI [28] and a feedback structure are mixed with each other in the agent based on our DisWare, which could also be called a DisWare agent. Hence, a DisWare agent could perform either some event-driven reactions or goal-oriented processes on its own initiative. In the detailed implementation, we compile our DisWare agent language code into pure nesC (a programming language for deeply networked systems) and call the nesC compiler to generate executables for the nodes in the wireless sensor networks. And we use the C# language to realize the same function on PCs, PDAs, or smart phones. Figure 8 describes the components of our DisWare agent [24].

Learning Phase

In our work, the learning is achieved through interaction with the actual environment. During the learning phase, every request offered by users, together with the corresponding environment states and other related information captured by the mobile agents, will be viewed as an input sample. And whenever a request is received, this neural network, with the assistance of the other AI techniques discussed before, is trained on the new sample set. In our implementation, the total number of samples is no more than 2000 due to the limitation of computational resources. In case there are already 2000 samples stored in our system, when a new sample is added, the earliest sample is deleted from the database simultaneously. Thus, an incremental and lifelong learning phase is formed.
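The 2000-sample cap with first-in-first-out replacement described above maps naturally onto a bounded queue. A minimal Python sketch, with illustrative names, might look as follows.

```python
from collections import deque

class SampleStore:
    """Keeps at most 2000 training samples; adding a new sample when
    full silently discards the earliest one, giving the incremental,
    lifelong learning behavior described in the text."""

    def __init__(self, capacity=2000):
        self.samples = deque(maxlen=capacity)

    def add(self, request, environment_state, extra_context):
        # Each user request plus the captured context forms one sample.
        self.samples.append((request, environment_state, extra_context))

    def training_set(self):
        return list(self.samples)
```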
Experimental Scenario

All of the components mentioned above are involved in our experimental test bed: the UbiLab workplace environment, as shown in Figure 9(a), and the UbiDorm, as represented in Figure 9(b) and Figure 9(c). Custom power line control automates all the lights, fans, air-conditioner, and other appliances such as the fire alarm, humidifier, etc. Perception of light, humidity, temperature, smoke, motion, and switch settings is performed through wireless sensor networks or wireless multimedia sensor networks. Identity check-up was accomplished by RFID techniques and motion sensors as soon as the users entered either UbiLab or UbiDorm, not only to keep unauthorized persons out of our smart hyperspace, but also to record context about when every single valid user arrived and how many authorized experimenters were in the UbiLab at any specific time, etc. The security of our smart hyperspace is also ensured by the smoke sensors and the fire alarm. A gateway fulfills the target of interconnection among the isolated physical spaces. And Figure 9(d) demonstrates a base node that can also integrate the information and communicate with PCs and other terminal devices.

Figure 10 and Figure 11 show both the sensor layout and the actuator layout in UbiLab and UbiDorm, where six kinds of sensors or devices, three sorts of physiological sensors, and a variety of effectors are involved. Some UbiCell nodes are pre-installed to monitor the environment, while others, which are connected with effectors, take charge of controlling them. DisWare is installed in every single terminal device in order to manage mobile agents. Volunteers in UbiDorm are additionally required to wear the UbiDot node to sense the physiological indices.

Finally, as an illustration of the techniques used in our research, a continuous evaluation of this prototype system, which lasted three weeks, was conducted by us. During this period, volunteers who didn't take part in the development of our system carried out their daily work in our UbiLab in the daytime, and one of them occupied the UbiDorm at night. Both the lights, fans, and other effectors in the laboratory and the home appliances such as the air-conditioner were pre-configured instruments, which could be operated by users in accordance with their own feelings via the interfaces we designed in all kinds of devices. For instance, when the user felt hot, he would probably turn on the air-conditioner. To achieve a higher-level goal, the condition that the user certainly wanted to enter a cool dorm during the harsh summer is under consideration too. So he could turn on the UbiDorm's air-conditioner when he was about to leave for UbiDorm a few minutes later, but still stayed in UbiLab. A more complex situation would be as follows: if the user felt nervous, partly because of the high temperature and low level of humidity, his pulse would consequently become higher than usual. At this time, the user would turn on the air-conditioner and humidifier, and be likely to enjoy some bright music. Subsequently, after the initial monitoring phase, our system would try to predict the user's actions based on the trained embedded artificial intelligence module, and then automate the corresponding actuators. Meanwhile, users were hardly aware of the cycle in which newly introduced samples brought modification and adaptation to our system every now and then.

Results and Analysis

During this experimental period, we evaluated the performance of the system, which used our agent-based solution, twice per day (at 9:00 AM and 9:00 PM) by inputting 30 simulated data points (not including multimedia information), and then a human identified whether the outputs were correct. Meanwhile, in order to prove the availability of the learning mechanism using hybrid AI techniques, we disabled only the Fuzzy Module, only the Genetic Algorithm, and both the Fuzzy Module and the Genetic Algorithm, respectively, under the same samples.
Figure 12 summarizes the experiment's results. The Y-value in Figure 12 represents the quotient obtained by dividing the number of outputs that matched the user's demand under the simulated environmental condition by the total number of input data, while the X-value indicates the experiment time. It is obvious that, because of insufficient samples, the first three days' (that is, the initial monitoring phase's) execution seems not so ideal. However, with the increment of samples, the precision grows. Besides, Figure 12 also shows that the AI computing module without the Fuzzy Module performs slightly worse than the hybrid AI. And if there is no Genetic Algorithm adding intelligence to our system, there is no apparent relationship between the number of samples and the precision, in which case the prototype performed much worse.

In addition, we also established a solution with no fuzzy module under the same situation, and then compared our system with it by counting the overall received packets in every 12 hours. Figure 13 clarifies the result of this comparison. Obviously, the non-fuzzy solution's overhead is much higher than that of our fully functional system. And during our experimental time, both solutions performed stably in each interval.

Furthermore, the number of manual interactions with our proposed system was recorded by us each day. Owing to the existence of the predicting and decision modules, the interactions were apparently reduced, according to Figure 14.

Conclusions and Future Work

In this paper, we have proposed an agent-oriented architecture for ubiquitous computing, as well as an actual paradigm of a smart hyperspace based on this novel structure, which is context-aware and capable of monitoring and providing automation to researchers. It is proved that this novel architecture, involving DisWare, the AI computing module, multi-agents, a diversity of terminal devices, and user interfaces, can be put together harmoniously and successfully into practice. And the results also show that our proposed hybrid AI techniques perform reliably, effectively, and efficiently.

Our future experimental program includes plans to add more physical spaces, which can cover almost every aspect of a person's daily activities, such as a classroom and a car, to our paradigm under this architecture. In addition, we also aim to extend the types of both sensors and actuators to complete a fully functional smart hyperspace, which really acts as a unique test bed for relevant further research, and provides potential value to commercial activities.
Figure 3. The architecture of DisWare.
Figure 7. Snapshots of GUI in various devices.
Figure 12. The precision on the test set against experimental time.
Figure 13. The received packets against experimental time.
Figure 14. The number of manual interactions with our system.
2015-12-31T08:38:17.981Z
2010-01-12T00:00:00.000
{ "year": 2010, "sha1": "6327daa77039217446c57b92557a13042694b4f0", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=1154", "oa_status": "HYBRID", "pdf_src": "Crawler", "pdf_hash": "6327daa77039217446c57b92557a13042694b4f0", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
267354751
pes2o/s2orc
v3-fos-license
Global Versus Local Theories of Consciousness and the Consciousness Assessment Issue in Brain Organoids

Any attempt at consciousness assessment in organoids requires careful consideration of the theory of consciousness that researchers will rely on when performing this task. In cognitive neuroscience and the clinic, there are tools and theories used to detect and measure consciousness, typically in human beings, but none of them is fully consensual or fit for the biological characteristics of organoids. I discuss the existing attempt relying on the Integrated Information Theory and its models and tools. Then, I revive the distinction between global theories of consciousness and local theories of consciousness as a thought-provoking one for those engaged in the difficult task of adapting models of consciousness to the biological reality of brain organoids. The "micro-consciousness theory" of Semir Zeki is taken as an exploratory path and an illustration of a theory holding that minimal networks can support a form of consciousness. I suggest that the skepticism prevailing in the neuroscience community regarding the possibility of organoid consciousness relies on some assumptions related to a globalist account of consciousness and that other accounts are worth exploring at this stage.

Introduction

Recently, human brain organoids have raised increasing interest from scholars of many fields, and a dynamic discussion in bioethics is ongoing. There is a serious concern that these in vitro models of brain development, based on innovative methods for three-dimensional stem cell culture, might deserve a specific moral status [1,2]. This would especially be the case if these small stem cell constructs were to develop physiological features of organisms endowed with nervous systems, suggesting that they may be able to feel pain or develop some form of sentience or consciousness. Whether one wants to envision or discard the possibility of conscious brain organoids, and whether one wants to acknowledge or dispute its moral relevance, the notion of consciousness is a main pillar of this discussion (even if not the only issue involved [3]). However, consciousness is itself a difficult notion, its nature and definition having been discussed for decades [4,5]. As a consequence, the ethical debate surrounding brain organoids is deeply entangled with epistemological uncertainty pertaining to the conceptual underpinnings of the science of consciousness and its empirical endeavor.
It has been argued that neuroethics should circumvent this fundamental uncertainty by adhering to a precautionary principle [6]. Even if we do not know with certainty at which point brain organoids could become conscious, following some experimental design principles would ensure that the research does not raise any ethically problematic features in the years to come. It has also been proposed to redirect the inquiry to the "what-kind" issue (rather than the "whether or not" issue) in order to rely on more graspable features for ethical assessment [7]. These strategies, however, make the epistemological issue even more relevant. The question of whether or not current and future organoids can develop a certain form of consciousness (without presupposing what these different forms of consciousness might be), and how to assess this potentiality in existing biological systems, is bound to stay with the field of brain organoid technology for a certain time. Even if it is not for advancing ethical issues, there is a theoretical interest in determining the boundary conditions of consciousness and its potential emergence in artificial entities. Although the methodological and knowledge gap is still wide between the research community on cellular biology and stem cell culture on the one side and the research community on consciousness, such as cognitive neuroscience, on the other, there will be more and more circulation of ideas and methods in the coming years. The results of this scientific endeavor will, in turn, impact ethics.

In this article, I look back at the history of consciousness research to find new perspectives on this contemporary epistemological conundrum. In particular, I suggest the distinction between "global" theories of consciousness and "local" theories of consciousness as a thought-provoking one for those engaged in the difficult task of adapting models of consciousness to the biological reality of brain organoids. The first section introduces the consciousness assessment issue as a general framework and a challenge for any discussion related to the putative consciousness of brain organoids. In the second section, I describe and critically assess the main attempt, so far, at solving the consciousness assessment issue relying on integrated information theory. In the third section, I propose to rely on the distinction between local and global theories of consciousness as a tool to navigate the theoretical landscape, before turning to the analysis of a notable local theory of consciousness, Semir Zeki's theory of microconsciousness, in the fourth section. I conclude by drawing the epistemological and ethical lessons from this theoretical exploration.

Detecting consciousness in diverse entities

I delineate here what I call the consciousness assessment issue: how to detect the presence or the absence of a possible form of consciousness that could emerge in some brain organoids, assembloids (compounds of organoids), or related technologies. In this sense, consciousness assessment is a scientific problem for which the theoretical and empirical tools are still to be decided. The issue arises as follows.
(i) A microphysiological system (e.g., an organoid made of stem cells in culture differentiating and self-organizing into a three-dimensional complex structure) reaches a certain degree of developmental complexity, up to the point that it looks like (hence organ-oid) a brain, a part of the brain, or a subset of parts of the brain. The very nature of this similarity would deserve more consideration, but let's say for the moment that the similarity is related to structural or functional aspects of the nervous system at an early stage of development.

(ii) This phenomenon inclines some scientists and ethicists to envision the possibility of some sort of consciousness or mental activity (sentience, experience…) occurring in microphysiological systems of this kind. If this is ever the case, the most sophisticated systems developed in advanced research settings would be potential candidates. This insight is based on several assumptions, notably the very general one according to which the nervous system plays a key role in consciousness emergence as we know it, but whether it implies a commitment to a purely neurocentric view of consciousness [8,9] is a point for discussion.

(iii) As a consequence, the research community wants to further assess this potentiality, for the sake of curiosity and for practical and legal reasons: ethical discussions on whether to terminate research or to endow organoids with a specific moral status might depend on the result of this assessment. For doing that, researchers have to turn to evidence-based methods and empirical measures for describing the actual processes going on, and agree on markers that can be identified by observing and manipulating the laboratory entity in consideration.

(iv) Indeed, in cognitive neuroscience and clinical practice, there are tools and theories built to detect and measure consciousness, typically in human beings. Researchers are already turning to these tools and theories to discuss the potentiality of consciousness emergence [10]. There are at least two difficulties then: none of these tools and theories is fully consensual, and none of them is tailored for the new entity of interest (especially in terms of empirical validation and practical measurement; I will discuss these points below).

(v) The scientific problem of consciousness assessment then states: how do we develop a device, akin to a measurement tool with an unambiguous output, that will help us assess whether or not the new entity of concern is developing a form of consciousness?

The consciousness assessment issue in brain organoids faces two sources of uncertainty. On the one hand, the field of consciousness studies is itself a field of debate with many competing theories. If one is to select a theory of consciousness to assess brain organoids, the number of theories available in the field and the lack of consensus regarding their relevance are certainly confusing [11,12]. Furthermore, with the exception of some notable efforts [13], there are few signs of advancement towards a possible resolution of the debate, which lacks a common nomenclature and fights over the interpretation of the available data. On the other hand, organoids are novel entities and thus pose a specific challenge to scientists studying them.
Analogy is a cognitive strategy commonly employed by scientists when confronted with a novel object and when the methodology to deal with this novel object is not established [14,15]. Drawing an analogy with a situation or an object with which the researcher is more familiar, or adapting an established methodology or tool from a related field of research, is a way to cope with the fundamental uncertainty imposed by novelty.

When mentioning the consciousness assessment issue in cerebral organoids, authors often draw an analogy with the "detection of consciousness" problem in comatose and unresponsive patients [3,10]. It is indeed not a small achievement of consciousness studies in the past decades to have forced us to revisit the clinical, ontological, and moral status of patients who were supposed to suffer from a complete loss of consciousness before they were assessed again using new tools and models. Thanks to precision gains in imaging tools and refined protocols to obtain functional images of patients' brains, consciousness researchers have been able to claim that they can, within a certain range of confidence, predict the state of consciousness of patients who are otherwise unable to communicate [16,17].

This achievement can be seen as a culmination of the branch of neuroimaging research referred to as the "reverse inference" problem, in which researchers look at biological signals to infer the mental states of the participants [18]. Although there is a debate on the validity of this kind of inference, there is no absolute rebuttal to the idea that, in principle, reverse inference is possible [19,20]. The science of consciousness starts from subjective reports to investigate the neural correlates of mental events. Based on participants' reports on their conscious experience (or on experimental configurations where the conscious experience of the participant is accessible in some way, e.g., a movie shown to the participant [21]), researchers can infer what kind of brain data or pattern of activation is associated with a given state of consciousness. This is of course a complex investigation that requires a subtle delineation of phenomenological concepts. The inference is easier for mapping the motor or perceptual cortex and grows more and more complex when it comes to disputed psychological notions [22]. Yet at some point, once the science is sufficiently established and correlations are reliable enough, it can be expected that researchers will be able to trust physical measurements in order to make predictions on the conscious state of a system. Reverse inference, in the strict, historical sense, can be seen as a laboratory challenge: there are still conscious participants who can report on their experience to cross-validate the predictions of the model. On the contrary, consciousness detection in comatose patients raises more epistemological and clinical challenges, as a jump into the unknown, because there are no other ways to communicate with patients to validate the tool. Consciousness detection in organoids faces an additional difficulty, as the biological systems are far from resembling fully grown human brains, on which the current models of consciousness in cognitive neuroscience are based. For instance, functional magnetic resonance imaging tools, which play a major role in reverse inference research and have led to some of the most surprising insights into the consciousness of unresponsive patients [17], have been designed for full-scale animal brains and not for
non-vascularized tissue in a dish.

The difficulty of assessing consciousness in organoids culminates by combining both sources of uncertainty: the evolving state of the field of consciousness research and the disruptive nature of organoids. Within this perspective, neuroethicists have proposed to rely on several theories of consciousness such as the integrated information theory [3,10], the global neuronal workspace theory [23,24], the temporo-spatial theory of consciousness [24], the higher-order theory [7], or the embodied approach [7,24], among many theories and approaches available. The task is made even more difficult because different theories do not necessarily share the same concepts and definitions, and assessment tools might not even have to rely on one specific theory. However, the work conducted by Lavazza and Massimini [10], and more generally by Lavazza in a series of papers, is the only attempt to envision concretely, up to a certain extent, a measurement tool based on one of these theories, and it builds from the integrated information theory (IIT).

IIT's ambitions

IIT is a theory of consciousness developed and continuously refined by neuroscientist Tononi and colleagues since the early 2000s [25][26][27]. In a nutshell, consciousness according to IIT is the ability of a physical system to integrate information. The theory lists the properties that characterize conscious states, so-called "axioms". These axioms (the exact number and definition of which depend on the version of the theory) state, for instance [28], that subjective experience exists and that a subjective experience is intrinsic (i.e., for a subject of experience), specific (each experience has its own features that make it specific), unitary or integrated (a conscious experience is a unified experience), definite (it is different from other possible experiences), and structured (for instance, a visual experience generally has several features such as shape, color, and motion). Then, the theory identifies under the label "postulates" the corresponding causal properties in the physical substrate instantiating these phenomenal properties. For instance, for a conscious state to be integrated (i.e., one unified experience), each part of the system must be connected with the rest of the system through causal interactions. The level of consciousness is associated with a quantity of information that is integrated in an irreducible manner in a network in which all components have an effect on other components. The theory leans on a mathematical framework interpreting the axioms. This modelling strategy provides a tool to formalize the theory and opens up avenues to empirical applications.

IIT builds from a general theory of conscious experience that could be applied to any physical system. To do so, IIT proponents have proposed a measure of "complexity" to assess the fact that the cognitive system is composed of entangled subsystems that share information, and not of independent modules. This indicator is identified with an "index of consciousness," labeled Φ, that is intended to measure the degree of consciousness of any system of interest, based on the topology of the network of connections in the system. In this context, IIT has been particularly appealing for assessing consciousness in brain organoids.
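Computing IIT's Φ proper requires a perturbational cause-effect analysis over all partitions of a system and is beyond a short example. The sketch below conveys only the "whole versus parts" intuition behind integration, using total correlation (summed marginal entropies minus joint entropy) over observed binary states; it is a pedagogical stand-in of our own devising, not the Φ of the theory:

import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable states."""
    counts = Counter(samples)
    n = sum(counts.values())
    probs = np.array([c / n for c in counts.values()])
    return float(-(probs * np.log2(probs)).sum())

def total_correlation(states):
    """Sum of marginal entropies minus joint entropy, for rows of binary states.
    Near zero for independent units; larger when the units share information."""
    states = np.asarray(states)
    marginals = sum(entropy(states[:, i]) for i in range(states.shape[1]))
    joint = entropy(map(tuple, states))
    return marginals - joint

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(5000, 3))                    # three unrelated units
coupled = np.repeat(rng.integers(0, 2, size=(5000, 1)), 3, axis=1)  # fully coupled units
print(total_correlation(independent))  # ~0 bits
print(total_correlation(coupled))      # ~2 bits (3 x 1-bit marginals - 1-bit joint)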
In a landmark article, Lavazza and Massimini [10] hypothesize that Φ can be adapted to assess consciousness potentially emerging in brain organoids. As a proxy for Φ, the authors rely on the "perturbational complexity index" (PCI), which is a measurement of brain complexity proposed by Massimini and colleagues [29]. In line with early versions of IIT [30], brain complexity is here understood as the nature of a system that is both functionally specialized and integrated. In a complex system, one small, local change will impact the state of the system as a whole. PCI is an attempt to measure complexity in this sense, by stimulating a localized part of the brain and assessing the global impact of this local stimulation. The main tool is transcranial magnetic stimulation (TMS) combined with electroencephalography recording (EEG): when TMS introduces a local perturbation in the nervous system, this perturbation is likely to lead to massive and unpredictable changes in the system if it is integrated (that would be a sign of consciousness), while it will likely lead to small changes in the global activity pattern if the system is modular (the local perturbation would have only local consequences). The methodology has proven reliable for predicting the capacity for consciousness of awake or anesthetized participants, and it has produced interesting results in the clinical assessment of unresponsive patients in a vegetative state [31].

Lavazza and Massimini claim that the PCI method could be adapted to organoids provided that we could replace TMS and EEG with more subtle measurement tools. The fact that PCI has been validated on a specific biological and cognitive system and that its tools (TMS and EEG) are tailored to the human brain is not a definite rebuttal: it would require some work to produce an index that puts all cognitive systems on the same scale, and the development of new tools would be a technical challenge, but the expectation does not seem unrealistic.
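To illustrate the logic behind PCI: the published index compresses the binarized spatiotemporal pattern of significant post-stimulus responses, using Lempel-Ziv complexity as the measure of compressibility. The toy sketch below substitutes a simple LZ78-style phrase count for the LZ76 variant used in the PCI literature, and random versus repetitive binary strings for integrated versus stereotyped responses; it is an illustration of the compressibility idea, not the validated index:

import math
import random

def lz_phrase_count(bits):
    """Count distinct phrases in a simple LZ78-style parse of a binary string."""
    seen, phrase, count = set(), "", 0
    for b in bits:
        phrase += b
        if phrase not in seen:
            seen.add(phrase)
            count += 1
            phrase = ""
    return count + (1 if phrase else 0)

def normalized_complexity(bits):
    """Phrase count scaled by log2(n)/n, so an incompressible random string
    scores near 1 and a repetitive string scores much lower."""
    n = len(bits)
    return lz_phrase_count(bits) * math.log2(n) / n

random.seed(1)
rich = "".join(random.choice("01") for _ in range(4096))  # differentiated, hard to compress
flat = "01" * 2048                                        # stereotyped, highly compressible
print(normalized_complexity(rich))   # near 1
print(normalized_complexity(flat))   # much lower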
An objection would be that this kind of index and its measurement procedures are only as strong as the theory behind them. Most scientific instruments are theory-laden (see, e.g., [32]); that is, they are built and validated following the principles of a particular theory, and they can become obsolete with the theory, and we know that, according to the regular course of science, all theories have to evolve or become falsified at some point. An index like Φ is closely bound to IIT, and the fate of the index is in a way committed to the fate of the theory (if one does not endorse IIT, one won't likely endorse Φ). More importantly, the tool is designed to register only what the theory would consider as conscious. As stated above, IIT relies on a certain number of "axioms" of phenomenal experience that determine the conditions under which a physical system is then an eligible candidate for consciousness. When these axioms are updated, does this mean that the phenomena captured are different? Furthermore, the axioms can be challenged [33]. This makes the tool extremely theory-dependent in a dangerous way: if one or several of these axioms do not represent true boundary conditions for conscious states, then the tool would have excluded some conscious states from its potential scope of observation because of the theoretical assumptions behind its design. For instance, if not all conscious states were composed or structured, a tool designed from the assumption that only a composed state is conscious would miss the detection of some conscious states (e.g., simple forms of subjective experience). This concern is especially relevant in the case of brain organoid consciousness assessment, for we have to be cautious not to discard the most alien and dissimilar forms of consciousness [12,34].
However, the perturbational complexity index, while inspired by IIT and introduced as a proxy for Φ, might be compatible with other theories such as the global neuronal workspace theory (GNWT) [35]. The history of science has shown that measurement tools can acquire some degree of autonomy once they circulate beyond their theoretical context of emergence (see, e.g., [36]). The reliability of the tool in itself might not be dependent upon the theoretical success of IIT but on its operationalization and validation in the "detection of consciousness" challenge in the clinical context. In the context of a review of consciousness theories, Seth and Bayne state that, "because theories of consciousness are themselves contentious, it seems unlikely that appealing to theory-based considerations could provide the kind of intersubjective validation required for an objective marker of consciousness. Solving the measurement problem thus seems to require a method of validation that is based neither solely on introspection nor on theoretical considerations" [37]. In the absence of a consensus on theories, the research community will likely confront the consciousness assessment issue, and especially the challenge of developing a measurement tool, by referring to a wide array of relevant concepts and models, even if these are not always consistent and are possibly taken out of their theoretical context. The task would be akin to theoretical and experimental physicists developing a "pidgin" (or creole language) to communicate in order to enable the functioning of big instruments, according to the model of "trading zones" elaborated by historian of science Galison [38]. In the "trading zone" of consciousness research, one will find ambiguous concepts and various experimental protocols and procedures adapted and negotiated between major stakeholders.
When it comes to the consciousness assessment issue in brain organoids, the appeal to tools inspired by IIT rests on many good reasons. First, IIT is currently popular in the field of consciousness studies: it has been selected as one of two main theories in an "adversarial collaboration" to confront their empirical predictions [39]. This current popularity makes IIT an attractive theory today, although popularity should not be taken as a definitive marker of validation or long-lasting authority. Furthermore, the fact that IIT's ambition is not to be strictly limited to human consciousness and that it wants to be applicable to all physical systems is also a point in favor of its use in unusual contexts such as brain organoids. Then, the measurement index Φ and its counterpart PCI are appealing if the EEG signal could be turned into a reliable "biomarker" of consciousness. Such a marker, if operational for practical measurement, would be a godsend for regulators and bioethicists. According to IIT, Φ could even have the advantage of providing a common measure of consciousness as a natural phenomenon in all kinds of biological and artificial systems, which would mean, for instance, that we could compare the "consciousness level" of a given brain organoid with the level of consciousness in, say, a fly, a monkey, an X-month-old infant, or a locked-in patient (in all these cases, the ethical consequences would be dramatically different). From this, regulators and bioethicists could discuss evidence-based criteria to determine how researchers should behave with the entities of concern: for instance, whether a system for which Φ reaches a certain threshold would deserve to be treated with some respect or, on the contrary, may be terminated. So many good reasons to adopt a marker that would have all these ideal characteristics (applicable to all kinds of entities and providing a scale across different levels of consciousness) are also reasons to resist the current proposal and test it against other alternatives. It should also be said that, even if the PCI has made some steps towards clinical validation, IIT does not today propose an empirically validated and uncontested methodology for the measurement of consciousness in human beings, let alone in other, less familiar systems [40].

In the next section, I will broaden the scope of the reflection by referring to the distinction between global and local theories of consciousness and examine how we might use this distinction as a guide to navigating the "trading zone" of consciousness assessment in brain organoids.

Global versus local theories of consciousness

The distinction between local theories of consciousness and global theories of consciousness is mentioned regularly and more or less formally by actors in the field. For instance, it has been used as a categorization tool in encyclopedia entries presenting a list of theories of consciousness [41]. A recent book by Lau elaborates on this distinction to provide a scale along which different theories of consciousness are distributed [42]. The main idea behind this distinction is that different theories will propose different neural bases for consciousness (or neural correlates of consciousness, NCC). In global theoretical frameworks, the NCC are extended to large parts of the brain, or even the whole brain, while in local theories the NCC are limited to small areas of the brain.
The distinction cannot be quantified and there is no straight line that can be drawn at first sight. There is often no strict delimitation of what "global" means in terms of brain function, and the point is not to set a limit on the number of brain areas that should be involved in a network to qualify as local or global. One could ask when an activation starts being global. In a sense, global does not mean that the "whole" brain has to be active for a conscious experience to arise. The nuance is in the contrast: local theories of consciousness look at consciousness as emerging from parts of the nervous system, instead of as the product of a global, widespread pattern. Local theories would put the finger on a specific brain area or a few ones, and consider a strong activity in these areas to be responsible for the emergence of subjective experience. As Lau summarizes: "subjective experiences happen when the right kind of neural activity occurs in the relevant sensory modality… the rest of the brain isn't really critically involved" [42]. On the contrary, global theories would insist on the idea that the activation should be broadly distributed to enable the emergence of something akin to consciousness. The main concepts put forward by proponents of global theories rely on the distribution of the process: synchronization of activity, long-distance connections, networks of areas that are anatomically distinct, re-entrant loops… All these concepts suggest that there is not only a critical mass of neurons involved but that the key to consciousness lies in the architecture that puts together distinct parts of the nervous system. According to Lau, this idea traces back to Dennett's "fame in the brain" [4] and the global workspace theory by Baars [43], for which specialized and separated information processing modules broadcast information to a central system. Current versions of the global neuronal workspace make consciousness dependent on the existence of long-range connections between many regions of the brain, including the parietal and prefrontal cortex [44]. Besides, the distinction does not map onto the distinction between "frontal theories" and "parietal theories" (see, e.g., [45]), which opposes, for instance, GNWT, for which the activation of the prefrontal cortex is a necessary condition for the emergence of consciousness, and IIT, which insists on connections inside the visual cortex and related areas.
Even a "parietal" or a "frontal" theory will have some global commitments if it attributes consciousness to large activation patterns. Interestingly, the earliest version of IIT was elaborated from the seminal work of neurobiologist Edelman and his "reentrant dynamic core theory" [25]. According to this approach, consciousness in human beings is dependent on reentry processes made possible by the thalamocortical loop. The information is integrated when it is processed in a network composed of both distributed and interdependent brain regions. The reentry phenomenon is the source of global coordination and synchrony in the brain, relying on long-distance connections, and this feature gives rise to a unified experience, or the binding of several elements of perception in one perceptual scene. Several "coalitions of neurons" compete, and the successive domination of coalitions explains the variety of conscious experiences. The dynamic core theory insists on the fact that consciousness emerges as the information is processed in the entire thalamocortical network, that is, as a global feature of the brain, by contrast with theories that search for the locus of consciousness by identifying the brain area responsible for its emergence. Edelman's and Tononi's work then converged on the idea of measuring complexity in a biological system [30]. However, while Edelman's theory relied first on certain neurobiological bases, IIT's approach from axioms to their physical bases suggests that there is no commitment to a particular neuroanatomical realization in the context of IIT. The characterization of IIT as a parietal approach comes from the fact that IIT's proponents have identified the "posterior cortical hot zone" [46] as the complex maximizing Φ. However, the idea that conscious states are integrated and that this integration corresponds to a network of interconnected structures (a "complex") in itself refers to a kind of globalist account. For instance, according to IIT [27], in the brain, the cortex has the kind of physical features that are required for integrating information, while the cerebellum does not, because of its modular composition. With regard to PCI, a localized stimulus that leads only to local perturbations is not regarded as a sign of consciousness, while a signature of consciousness would be the observation of massive consequences of a localized stimulus, in other words, the global consequences of local stimulation.

A theory such as Victor Lamme's local recurrency theory [47] would also be interesting because it insists on the temporal dynamics of activation and attributes consciousness to a recurrent wave of activity between the primary visual cortex and temporal areas. While still relatively local compared with GNWT, the concept of recurrence based on connections between different regions refers to more than one specific brain area.
In any case, this global/local distinction has to be taken as a landmark or a scale rather than a systematic classifier. The distinction is interesting with regard to the issue of organoid consciousness. At first sight, it seems more difficult to build in a dish a system capable of global activation than to replicate the local activity of specific brain regions. Brain organoids are definitely not "mini-brains" in the sense of functional equivalents of full human brains, even at a smaller scale. However, one can relatively easily envision small replicas of brain regions that are realistic enough to exhibit some properties of the regions they model. If we suppose that consciousness emerges when these regions are active, even locally, then we will have to assess this possibility. Of note, in the current state of knowledge, this possibility relies on many unknowns, because science still needs to provide a better understanding of the structural and functional correlates of consciousness, not only at the neuronal level but also including the role of the body and the environment. On the contrary, if one trusts global theories only, then one will easily discard the possibility of consciousness emergence in organoids in the years to come. Assembloids (compounds of organoids that replicate distinct brain regions or other organs) might then be a source of concern, but this possibility would still be far remote, because the critical mass of neurons and the long-distance connections that are required for consciousness in biological settings are still out of reach of stem cell biotechnology. Furthermore, if we build our assessment tools (to detect the possible emergence of consciousness in brain organoids) from global theories, then we might miss potentially interesting phenomena that would emerge at a local scale.

In the next section, I will overstate the case on purpose and consider Zeki's theory of minimal consciousness as a clear example of a local theory of consciousness. I do not want to take a stand between global and local theories (nor between IIT and the microconsciousness theory). This article explores instead the meaning of these approaches: how they fit with consciousness assessment in organoids and incline us to look at the problem from a different perspective.

A theory of micro-consciousnesses

The "microconsciousness theory" was proposed by Semir Zeki [48,49], an expert in functional specialization in the visual brain.¹ Zeki's theory of microconsciousness stipulates that several consciousnesses can co-exist in the visual system. According to Zeki, consciousness is not a single, unitary phenomenon but involves multiple consciousnesses distributed in distinct processing sites. The visual system is composed of many specialized modules, each exhibiting a sign of partial consciousness, such as consciousness of form, movement, and so on.
The theory starts from the fundamental observation, accumulated over decades and species, that the visual system is composed of rather autonomous subsystems, each separately processing information related to color, motion, form, and so on. In the clinic, dissociations have shown that one subsystem can be impaired while others remain functional. Although contemporary debates are not framed that much in these terms, one of the big challenges for the science of consciousness in the 1990s and early 2000s was the "binding problem" [51]. While I see a consistent, unified scene (e.g., I see a white cat running from left to right), anatomy and physiology teach us that all these features are processed separately in the visual system (the motion of the cat is encoded in some cortical area, its color in another, etc.), and the problem also makes sense at a larger scale if we add other sensory modalities. Electrophysiological mapping has shown that distinct neurons are sensitive to orientation, form, and color, and that distinct areas are responsible for processing the information related to particular attributes of experience. The binding problem can be formulated in the following way: once all the attributes of the visual world have been delineated in subprocesses that are responsible for, e.g., color/form/movement/location in the brain, how does the nervous system put the pieces together to generate a conscious, unified experience? In this sense, the binding problem equates to the issue of the emergence of consciousness: solving the binding problem would be solving the issue of consciousness. Zeki objects that this is not true, because we do not need this kind of binding to have an experience.² Binding is not a necessary condition for the emergence of experience: each subsystem has the ability to generate in parallel a microconsciousness of its own (e.g., an experience of color, an experience of movement), and one can experience color, motion, and form separately. Hence the theory of "microconsciousness."

A motto defended by the microconsciousness theory is that "processing systems are also perceptual systems" [48]. In terms of model building, there is no need to multiply functions. There is no need for a perceptual system on top of the visual system that would turn the visual representation into a percept. If there are some preconscious representations, the percept will be built from these, not from something else.

"We suppose that visual consciousness consists of many, functionally specialized, micro-consciousnesses which are spatially and temporally distributed if they are the result of activity at spatially distributed sites (as in the case of color and motion). This we believe to be the direct consequence of the fact that the several, parallel, multinodal, functionally specialized, and autonomous processing systems are also perceptual ones and that activity at each node of each processing-perceptual system can become perceptually explicit." [48]

Two main strands of argument support this view. The first one is taken from psychophysics. Visual perception does not occur as a synchronous phenomenon: some attributes are processed before others, and therefore they can be perceived before others when experimenters find the right manipulations to let that dissonance come to consciousness. "Different processing systems create their corresponding percepts independently and with different delays." [52] In particular, participants can perceive location before color, and color before motion (according to the authors, the result is particularly strong for color before motion [52]). Zeki labels this phenomenon the "asynchrony of visual perception." Over a brief time window, there are several micro-consciousnesses corresponding to different attributes of the visual scene, processed by different subsystems.

¹ Zeki is a neurobiologist in London (UCL) who specialized in vision processing [50]. Mainly active during the 1980s-2000s, he is now retired. Most of his expertise was built on the electrophysiology of the monkey visual system, and he turned in his later years to "neuro-aesthetics" and the perception of art.

² Cognitive neuroscientist Dehaene also offers a nice argument, on different premises, for not equating binding and consciousness [44]. From priming (masking) tasks, we know that consistent representations (i.e., already integrated and unified, with attributes bound together) can remain unconscious. For example, chess experts can process the information provided by a subliminal chessboard. That is, we have unconscious representations for which the "binding process" must have already occurred. In other words, binding is neither a necessary (Zeki) nor a sufficient (Dehaene) condition for consciousness.
The second strand of arguments comes from dissociations in neurological patients. In many neurological syndromes, such as agnosia, patients are unable to perceive a global scene, but they are not deprived of all experience. These patients, while unable to combine their experiences into a whole, are often able to "see and understand what the intact nodes of their processing-perceptual systems allow them to see and understand" [48]. Thanks to the remaining activity in some subsystems of vision processing, they have residual capacities that allow them to see details of a scene that, for them, does not make sense as a whole. This happens when one sees colors or forms without perceiving and identifying familiar objects. For instance, a patient who could not experience shapes and colors because of a lesion of the primary visual cortex could still experience motion, like a person without a lesion would perceive, eyes closed, a shadow moving in front of a source of light (according to the patient's report). This case is different from the classical interpretation of blindsight as a pathology of consciousness, where patients are unable to report a feature of a scene (that is, they are not conscious of it) but still behave in certain experimental conditions as if they could process the information unconsciously [53]. In the case presented by Zeki, the subjective experience still exists, but reduced to a minimal aspect, as if the only residual processing area after the lesion were also able to produce a minimal feature of consciousness. As a consequence, Zeki suggests that "activity at any given stage of a processing system can have a conscious correlate" [48]. Particular pathways of information processing are responsible for different kinds of subjective experiences. According to Zeki, some patients, affected by specific defects of their visual system, "are capable of a more elementary perceptual experience of the relevant attributes than normals but are nevertheless able to experience something of the relevant attribute" [48], even if their subjective experience does not have the richness of a neurotypical perceptual activity. Such patients are able to "see and experience details of a given attribute without being able to combine the details into a whole and thus experience the whole" [48].

In an overarching article, Zeki introduces a hierarchy between micro-consciousnesses and unified consciousness [49]. When the different attributes coming from different processing subsystems are merged, there is a macro-consciousness, or unified consciousness, at a higher level, which enables the emergence of a global picture. These micro-consciousnesses seem erased when integrated into a macro-consciousness: we generally have the impression of perceiving a moving object, not motion plus an object. However, the author insists on the fact that behind this apparent unity of consciousness there is disunity, many asynchronous components (micro-consciousnesses) that are part of the experience. "The quest for the NCC will remain elusive until we acknowledge that consciousness is not a unity, and that there are instead many consciousnesses that are distributed in time and space" [49]. If consciousness is not necessarily unified, because we can be conscious of different aspects at different times, then there are several parts of consciousness, or snatches of consciousness. In this framework, neither top-down influences nor long-distance connections are required for the emergence of consciousness.
An objection against the micro-consciousness theory is that processing by subsystems would be a necessary but not a sufficient condition for consciousness. A global theorist would hold that "micro-conscious" experiences are not exactly conscious, as they become actually conscious only when integrated into a more complex system or broadcast into a global network. If what is broadcast is only information about motion, then the subject will be conscious of motion without being conscious of form or color. The question of sufficient conditions is an empirical one, open for debate, for which there is scarce, and often disputable, evidence. The micro-consciousness theory is probably not a complete theoretical framework, and this article does not want to argue for its validity against other theories of consciousness but, more modestly, to consider its hermeneutic potential for the consciousness assessment issue in brain organoids.

Discussion

While there is a great interest in neuroethics for human brain organoids and the possibility of these entities deserving a special moral status, a vast majority of actors in the field, especially stem cell researchers and neuroscientists themselves, do not see the consciousness of organoids as a realistic possibility and pressing issue. Most of them judge the emergence of high levels of consciousness in artificial entities as very unlikely given the current state of biotechnology, and some even discard the option as a fantasy [54]. Official reports of academic societies endorse this "nothing to declare" position. For instance, the International Society for Stem Cell Research suggests not only that the prospect of conscious in vitro organoids in the foreseeable future is unrealistic, but that it is professional misconduct to communicate publicly along this line: "This is particularly relevant to brain organoids and human-animal chimeras, where any statements implying human cognitive abilities, human consciousness or self-awareness, as well as phrases or graphical representations suggesting human-like cognitive abilities risks misleading the public and sowing doubts about the legitimate nature of such research" [55]. Another recent report states that: "It appears at present that neural organoids have no more moral standing than other in vitro human neural tissues or cultures. As scientists develop significantly more complex organoids, however, the need to make this distinction will need to be revisited regularly" (although what should count as "significantly more complex" is left to interpretation) [56].
Skeptical accounts of this sort (regarding the emergence of human-like consciousness in brain organoids) are grounded in several assumptions. One of them is a rather anthropocentric or neurotypical concept of consciousness as what matters ethically. The fact that "human-like cognitive abilities" are not in sight does not mean that other, different forms of consciousness do not deserve attention. This is something that the consideration of a broad range of theories of consciousness should encourage us to consider. In general, the field of consciousness studies is full of borderline cases and extreme conditions (from neuropathology, animal experiments, and complex experimental designs with human subjects) that are very intriguing and should incline us to reexamine our expectations regarding what counts as typical or significant. The position that tells us to postpone the ethical and epistemological reflection while "keeping an eye" on the progress of the technology, in very broad terms, is problematic in the sense that it does not provide monitoring tools and specific signs of concern.

The skeptical account would somehow follow this line of reasoning: if we want to look for the emergence of a typical, "human-like" form of consciousness in brain organoids, then we have to look for some kind of global activation, which of course will not occur, because of the limitations of current organoid technology, until organoids are composed of multiple realistic interconnected brain systems, like a human brain. The NASEM report [56] states something along this line when it writes that the status of brain organoids is not different from the status of regular in vitro cell culture until "significantly more complex" organoids are grown. It might be an overinterpretation to refer here to IIT's grounding of consciousness in the "complexity" of a system. One lesson from IIT and its gradual approach might be that a system does not have to be as complex as the human brain to give rise to subjective experience. Less complex systems, but still complex to a certain extent, would have enough "power" to raise concerns, if the dominant system reaches a certain level of Φ, according to IIT. This stance would be even stronger with the microconsciousness theory, according to which we would not have to look for complexity but for the possibility of replicating "perceptual sites" in vitro. In the microconsciousness theory, experience can emerge from local brain activity. Such a statement would not necessarily be impossible within IIT, although the axioms defining consciousness in this theoretical framework put some conditions on what counts as an experience. Depending on how the axioms of IIT are applied, they may impose unnecessary restrictions on the forms of experience that we might want to capture. That would be the case of the axiom of composition, positing that all conscious states are structured and composed of several features (in other terms, after binding), while microconsciousnesses according to Zeki would occur even at an earlier stage. That would also be the case of the axiom of exclusion, according to which one conscious state "excludes" others, so that, if several complexes co-exist in a system, only the one maximizing the value of Φ (labelled Φmax) will be conscious. In Zeki's framework, several microconsciousnesses co-exist all along, and it seems that these micro-consciousnesses are interpenetrating, and integrated or erased at a higher level when merged into a macroconscious state.
Raising competing theoretical views on consciousness, even if both views have limitations, has the advantage of questioning the implicit assumptions behind the skeptical account. Human beings typically have two brain hemispheres and see the world as one; notably, we are not aware of a boundary between the receptive fields of the primary visual cortex at the border of both hemispheres. The idea that a subsystem responsible for motion detection, color analysis, or shape delineation could give rise to conscious states by itself is intriguing. What if we try to build organoids that replicate precisely these subsystems? Couldn't they be subjects of an experience that we could describe and that could correspond to something that we could experience too? Widening the scope of our models of consciousness will benefit both our discussion of ethical concerns and our epistemic curiosity.

We would then have to assess the ethical implications of the different theoretical scenarios. Which exact features of subjective experience would give rise to ethical consideration and potential moral status? Would a subjective experience staying at a minimal (e.g., perceptual) level be valuable in itself? Sentience, pain, and stress are in general of major concern when it comes to defining the moral status of brain organoids we are interacting with in the laboratory [6,57]. In this framework, subjective experience is considered morally significant because the experience has a positive or negative value from the viewpoint of the subject of experience. For instance, pain has a negative value that can be detected by the fact that the organism experiencing pain systematically avoids this kind of experience. In other words, the valence of the experience determines its moral significance for a given subject. However, looking for valenced experiences already conflates a certain number of features of experience (the perceptual content of the experience, the pain or feelings associated with it, and its interest for the organism) that could be analytically distinguished and, potentially, replicated separately in different technological in vitro systems, which we could label "microconscious organoids," provided the microconsciousness theory is true. In the context of microconscious organoids, one can wonder what would be the moral significance of, for instance, perceptual experience if it has no valence. "Having an experience of blue" or "being a subject capable of having an experience of blue" is definitely not equivalent to "being a subject of suffering," but maybe it is still more than being a cell culture in a dish. Even if it were established that a microconscious organoid is sensitive to a certain range of colors, would that be a sufficient condition to impose some restrictions on the use of this organoid for research? Is the status of "subject of experience" something that has to be protected, even if this experience is only perceptual and does not involve pain?
The answers to these questions are not obvious [58], and I cannot explore all their ramifications here. We could, however, gain from the mobilization of the most local approaches to consciousness, even as a foil, in the following way. If some ethical concern is going to emerge from the potentiality of human brain organoids in the near future, it will not be because of their similarity to a fully developed, mature human brain, because in vitro models are and will stay very dissimilar to their natural counterparts in many respects [59]. The analogy with various animal nervous systems and developing human brains is hazardous as well [34]. Thus, starting from the suspicion that even simple systems could acquire a form of microconsciousness, whatever the moral significance of this point could be, and then adding the relevant features that would make this experience morally significant, is more likely to follow the development of organoid technology. Indeed, a major source of the gain in "complexity" in brain organoid technology relies on the fact that organoids replicating different parts of the brain can be merged into functional assembloids [60,61]. Even if a biological system is often more than the sum of its parts, a prospective approach with this framework in mind could at least help us identify in advance which assembloids would require a substantive consciousness assessment exercise and which would not.

Acknowledgements I would like to thank the organizers of the research retreat on ELSA of brain organoids in Tübingen for their impulse and continuous efforts. This article was also presented at the "Detecting unusual consciousness" conference at the University of Bonn. I received very insightful comments in both contexts and also from the reviewers of this journal.

Funding Open access funding provided by University of Oslo (incl Oslo University Hospital). The author is employed in the EU H2020 HYBRIDA Project, Embedding a comprehensive ethical dimension to organoid-based research and related technologies (Grant Agreement 101006012).

Conflict of interest The author has no competing interests. No ethics approval was required. Maxence Gaillard is the only author of this work.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
2024-02-01T16:38:44.058Z
2024-01-27T00:00:00.000
{ "year": 2024, "sha1": "775db99fd775de53d7fc5c0f2752104a8ab893c8", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12152-024-09544-7.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "719e901c255031736a3300c000ed915b1bc19761", "s2fieldsofstudy": [ "Philosophy", "Medicine" ], "extfieldsofstudy": [] }
234791875
pes2o/s2orc
v3-fos-license
Hydrogen sulfide attenuates renal I/R-induced activation of the inflammatory response and apoptosis via regulating Nrf2-mediated NLRP3 signaling pathway inhibition

Renal ischemia/reperfusion (I/R) injury can lead to acute renal failure, delayed graft function and graft rejection. Nucleotide-binding oligomerization domain (NOD)-like receptor containing pyrin domain 3 (NLRP3)-mediated inflammation participates in the development of renal injury. Nrf2 accelerates NLRP3 signaling pathway activation and further regulates the inflammatory response. In addition, hydrogen sulfide serves a protective role in renal injury; however, the detailed underlying mechanism remains poorly understood. The present study investigated whether the Nrf2 and NLRP3 pathways participate in hydrogen sulfide-regulated renal I/R-induced activation of the inflammatory response and apoptosis. Wild-type and Nrf2-knockout (KO) mice underwent surgery to induce renal I/R via clamping of the bilateral renal pedicles. A total of 20 mg/kg MCC950 (an NLRP3 inhibitor) was injected intraperitoneally daily for 14 days prior to surgery. Renal tissue and blood were collected from the I/R model mice to analyze NLRP3 and Nrf2 mRNA expression levels; NLRP3, PYD and CARD domain containing (ASC), caspase-1, IL-1β, Nrf2 and heme oxygenase 1 protein expression levels; cell apoptosis; the secretion of tumor necrosis factor-α, IL-1β and IL-6 cytokines; and renal histopathology and function. Renal I/R activated the NLRP3 and Nrf2 signaling pathways. Conversely, MCC950 treatment inhibited activation of the NLRP3 signaling pathway, and prevented I/R-induced renal injury, release of cytokines and apoptosis in renal I/R model mice. Sodium hydrosulfide (NaHS) not only alleviated the upregulation of NLRP3 protein expression levels, but also relieved renal injury, release of cytokines and cell apoptosis induced by renal I/R in wild-type mice, but not in Nrf2-KO mice. NaHS alleviated NLRP3 inflammasome activation, renal injury, the inflammatory response and cell apoptosis via the Nrf2 signaling pathway in renal I/R model mice.

Introduction

Renal ischemia/reperfusion (I/R) is a leading cause of acute kidney injury (AKI) that typically occurs during renal surgery and predisposes individuals to severe dysfunction in multiple organ systems. I/R-induced cell death and inflammation within renal tissue are associated with complex pathophysiological processes and promote the decline in renal function, thereby leading to the accumulation of metabolic waste products (1). In addition to ischemia-induced renal tissue damage, reperfusion has also been found to trigger a complex pathophysiological cascade that involves the excessive release of free radicals and the accumulation of inflammatory cells, subsequently promoting inflammation and further aggravating local ischemia and cellular damage (2). Subfamilies of the nucleotide-binding domain and leucine-rich repeat containing family serve a key role in the development of the inflammatory response (3). The NLR family pyrin domain containing 3 (NLRP3) inflammasome (previously known as NACHT, LRR and PYD domains-containing protein 3, and cryopyrin) is the most characterized inflammasome to date. An increasing number of studies have reported an association between NLRP3 and renal I/R. In AKI, NLRP3 is involved in renal I/R injury (4,5). For example, knockdown of NLRP3 and its adaptor PYD and CARD domain containing (ASC) alleviates I/R-induced renal dysfunction and excessive neutrophil influx into kidney tissue (6).
In addition, the knockdown of NLRP3 and/or caspase-1 protects mice from AKI induced by sepsis or lipopolysaccharide (7,8). These findings suggest that the NLRP3 inflammasome may serve a key role in the pathogenesis of renal damage. Nrf2 is an endogenous antioxidant transcription factor located in the cytosol, where it combines with Kelch-like ECH-associated protein 1 (Keap1). Once activated, Nrf2 is released from Keap1 and translocates into the nucleus, where it subsequently binds to antioxidant response elements to initiate the transcription of its downstream antioxidant and cytoprotective target genes, such as superoxide dismutase, catalase, glutathione peroxidase, glutathione-S-transferase isozymes, catalytic and modifier subunits of γ-glutamyl cysteine ligase and NAD(P)H:quinone oxidoreductase (9). It was previously reported that Nrf2 is associated with AKI induced by environmental insult, ischemia or xenobiotics (10). Apart from its reported protective role in tissue injury, previous studies have also demonstrated that overexpression of Nrf2 suppresses NLRP3 inflammasome activity and ameliorates tissue injury (11)(12)(13). However, the underlying mechanism of the Nrf2 signaling pathway and its effect on NLRP3 inflammasome activation in renal I/R remains unknown. Hydrogen sulfide (H2S) has gained recognition for its ability to regulate mammalian homeostasis and fundamental cell processes, such as autophagy (14). A number of studies have reported that H2S serves a protective role against apoptosis, oxidative stress and inflammation in renal injury (15)(16)(17)(18)(19)(20). Nevertheless, the underlying protective mechanism of H2S in renal injury induced by I/R remains unknown. Therefore, the present study investigated the expression levels of NLRP3 and its adaptor, ASC, in renal I/R model mice, and subsequently determined the effect of Nrf2 on NLRP3 expression. The present study also aimed to determine whether H2S protects kidney tissue against renal injury, inflammation and apoptosis via Nrf2-mediated NLRP3 inhibition. Establishment of renal I/R model and treatment groups. A total of 24 male wild-type mice (weight, 20-25 g; age, 6-8 weeks) were obtained from the Laboratory Animal Center of the Academy of Military Medical Sciences (Beijing, China). In addition, nine male Nrf2-knockout (KO) mice (weight, 20-25 g; age, 6-8 weeks) were obtained from Junke Biological Co., Ltd. All animal protocols were approved by the Animal Care and Use Committee of General Hospital of Tianjin Medical University (approval no. IRB2020-DW-04; Tianjin, China), and all efforts were made to minimize animal suffering and the number of animals used. The mice were acclimated to the environment for 3 days prior to the experiment at 22˚C with 40-60% humidity, a 12-h light/dark cycle and free access to water and rodent chow. After mice were anesthetized with intraperitoneal (i.p.) 50 mg/kg sodium pentobarbital, the bilateral renal pedicles were clamped for 30 min using microaneurysm clamps. Subsequently, the microaneurysm clamps were removed. In the control group, surgery was performed using the same process; however, the renal pedicles were not clamped. Following surgery, 0.9% sodium chloride solution was used to help mice recover. Mice were then sacrificed following reperfusion for 24 h. The 24 wild-type mice were randomly divided into the following four groups: i) Control (Con) group (n=9); ii) I/R group (n=9); iii) I/R+MCC950 group (n=3); and iv) I/R+NaHS group (n=3).
A total of nine Nrf2-KO mice were randomly divided into the following three groups: i) Con group (n=3); ii) I/R group (n=3); and iii) I/R+NaHS group (n=3). According to previous research and preliminary experiments (21,22), 20 mg/kg MCC950 (Sigma-Aldrich; Merck KGaA) was injected (i.p.) daily for 14 days prior to surgery in the I/R+MCC950 group. A total of 50 µmol/kg sodium hydrosulfide [NaHS; i.p.; diluted in normal (0.9%) saline; Sigma-Aldrich; Merck KGaA] was injected prior to surgery in the I/R+NaHS group. Once all experimental procedures were complete, mice were sacrificed by cervical dislocation, and tissue and blood were collected for subsequent analysis. Reverse transcription-quantitative (RT-q)PCR. Kidney tissue was collected from I/R model mice following 24 h reperfusion to analyze mRNA expression levels. Briefly, total RNA was extracted from tissue using TRIzol® reagent (cat. no. 15596026; Invitrogen; Thermo Fisher Scientific, Inc.). Total RNA was reverse transcribed into cDNA using a RevertAid First Strand cDNA Synthesis kit (cat. no. K1621; Thermo Scientific™; Thermo Fisher Scientific, Inc.) according to the manufacturer's protocol. qPCR was subsequently performed using SYBR PCR Master Mix (Thermo Fisher Scientific, Inc.) on a 7500 Real-Time PCR system (Applied Biosystems; Thermo Fisher Scientific, Inc.). The thermocycling conditions were as follows: Denaturation at 95˚C for 5 min, followed by 40 cycles of 95˚C for 15 sec and extension at 60˚C for 20 sec. The gene-specific primer sequences are listed in Table I. The relative expression levels of the target genes were calculated using the 2^(-ΔΔCq) method (23) and normalized to GAPDH expression levels. Immunohistochemistry (IHC). Kidney tissue was collected from I/R model mice following 24 h reperfusion to determine NLRP3 and Nrf2 expression levels. Briefly, kidney tissue was fixed at room temperature for 48 h in 10% formalin, embedded in paraffin and cut into 5-µm sections. The sections were subsequently blocked with 5% milk at 37˚C for 30 min and incubated with anti-NLRP3 (1:100; cat. no. ab214185; Abcam) or anti-Nrf2 (1:200; cat. no. ab137550; Abcam) primary antibodies overnight. Following primary antibody incubation, the sections were incubated with HRP-labeled anti-rabbit secondary antibody (1:200; cat. no. ab214880; Abcam) at 37˚C for 30 min, developed using diaminobenzidine solution for <3 min and counterstained with 0.5% hematoxylin for 3 min at room temperature. A light microscope was used to visualize IHC staining (magnification, x200; Biorevo BZ-9000; Keyence Corporation). Renal function assay. A total of 2-3 ml blood was collected from I/R model mice following 24 h reperfusion, and serum was obtained by centrifugation at 3,000 x g for 10 min at 4˚C. The serum was used to analyze renal function via ELISA to determine levels of creatinine (cat. no. C011; Nanjing Jiancheng Bioengineering Institute) and blood urea nitrogen (BUN; cat. no. C013; Nanjing Jiancheng Bioengineering Institute), according to the manufacturer's instructions. Kidney injury molecule-1 (KIM-1) levels in serum were measured by ELISA (cat. no. EK0880; Wuhan Boster Biological Technology Co., Ltd.) according to the manufacturer's instructions. Hematoxylin and eosin staining. Kidney tissue was collected from I/R model mice following 24 h reperfusion to evaluate renal histopathological changes. Tissue was fixed at room temperature for 48 h in 10% formalin and then embedded in paraffin.
Paraffin-embedded tissue was cut into 5-µm thick sections and stained with 0.5% hematoxylin for 5 min and eosin for 3 min at room temperature (25˚C). The presence of hemorrhage, tubular cell necrosis, tubular dilatation and cytoplasmic vacuole formation in the tissue was scored as follows: 0, normal kidney; 1, minimal damage; 2, moderate damage; and 3, severe damage (24). Renal injury scores were calculated in a blinded manner by two researchers under a light microscope (magnification, x200; Biorevo BZ-9000; Keyence Corporation). ELISA analysis of TNF-α, IL-1β and IL-6 levels. Blood (1 ml) was collected from I/R model mice following 24 h reperfusion. The serum was obtained by centrifugation at 3,000 x g for 10 min at 4˚C. The serum levels of TNF-α (cat. no. MTA00b), IL-1β (cat. no. MLB00C) and IL-6 (cat. no. M6000B) were analyzed using commercial ELISA kits (R&D Systems, Inc.), according to the manufacturer's protocol, on a microplate reader (cat. no. CA94089; Molecular Devices, LLC). TUNEL staining. Kidney tissue was collected from I/R model mice following 24 h reperfusion to evaluate cell apoptosis using TUNEL staining. Briefly, tissue was fixed at room temperature for 48 h in 10% formalin, embedded in paraffin and cut into 5-µm thick sections. Sections were incubated with TUNEL reagent (Roche Diagnostics) in a humidified chamber at 37˚C for 60 min in the dark and stained with DAPI for 5 min at room temperature. Then, 10 high-power microscope fields were randomly picked and observed in each section with a fluorescence microscope (magnification, x10). Statistical analysis. Data are presented as the mean ± SD of three experiments and were analyzed by GraphPad Prism 5 (GraphPad Software, Inc.). Statistical differences between two groups were analyzed using a paired Student's t-test. Statistical differences among three or more groups were analyzed using one-way ANOVA followed by post hoc Tukey's multiple comparisons test. All data were normally distributed and had an equal variance. P<0.05 was considered to indicate a statistically significant difference. Results MCC950 treatment prevents I/R-induced NLRP3 signaling activation following renal injury in mice. The NLRP3 inflammasome pathway is activated and serves a crucial role in kidney injury (25,26). A previous study also reported that NLRP3 signaling is activated in renal tissue of mice at 24 h post-reperfusion (4). Therefore, 24 h post-reperfusion was selected as the present assessment time point. NLRP3 protein expression levels were upregulated by renal I/R injury in the I/R group compared with the control (Con) group (Fig. 1A and B). A similar trend was observed in NLRP3 mRNA expression levels (Fig. 1C). Furthermore, the number of NLRP3-positive cells increased in the I/R group compared with the Con group, as determined by IHC staining (Fig. 1D and E). In order to determine the underlying mechanism of NLRP3 inflammasome activation and the effect on the downstream signaling pathway following renal I/R injury, mice were treated with the NLRP3 inhibitor MCC950. Western blotting results demonstrated that I/R upregulated protein expression levels of NLRP3 and its adaptor ASC and promoted maturation from pro-caspase-1 to caspase-1 and from pro-IL-1β to IL-1β in the I/R group compared with the Con group (Fig. 2A-E). These results indicated that renal I/R may induce activation of the NLRP3 inflammasome pathway and treatment with MCC950 may downregulate NLRP3, ASC, caspase-1 and IL-1β expression levels following renal I/R injury.
Effect of the NLRP3 inflammasome on renal injury, inflammation and apoptosis in renal I/R model mice. In order to determine the effect of NLRP3 on renal injury, I/R model mice were treated with the NLRP3 inhibitor MCC950. Renal I/R model mice exhibited significantly increased levels of BUN, serum creatinine and KIM-1, the latter a biomarker of renal injury (Fig. 3A-C), indicating successful induction of renal I/R injury. Histological examination of renal tissue from I/R model mice revealed an increased histological score, evidenced by severe tubular epithelial swelling, tubular dilation, interstitial edema, vacuolar degeneration and loss of the brush border, suggesting that renal I/R promoted significant renal tissue damage (Fig. 3D and E). Inhibition of NLRP3 by MCC950 treatment significantly decreased the I/R-induced increases in BUN, creatinine and KIM-1 levels and decreased the histological score, with renal tissue exhibiting fewer severely injured tubules (Fig. 3A-E). In addition, levels of inflammatory factors and apoptotic cells following MCC950 administration were analyzed in renal I/R model mice. Indicators of inflammation, including TNF-α, IL-1β and IL-6, were significantly upregulated in the I/R group compared with the Con group (Fig. 3F-H). Moreover, compared with the Con group, the percentage of apoptotic cells was significantly increased at 24 h post-reperfusion in the I/R group (Fig. 3I and J). Compared with the I/R group, the secretory levels of the cytokines TNF-α, IL-1β and IL-6, and the percentage of apoptotic cells, were decreased by MCC950 treatment in the I/R+MCC950 group (Fig. 3F-J). These results indicated that inhibition of the NLRP3 inflammasome may prevent the release of inflammatory factors and apoptosis of renal tissue induced by I/R. Nrf2 is essential for renal I/R-induced regulation of NLRP3 inflammasome activity. A previous study revealed that Nrf2 is activated and expression levels of Nrf2 target genes are upregulated in kidney tissue following renal I/R injury (27). Similar findings were obtained in the present research. Nuclear protein, total protein and mRNA expression levels of Nrf2 were analyzed at 24 h post renal I/R injury. The results demonstrated that, compared with the Con group, the nuclear protein, total protein and mRNA expression levels of Nrf2 were upregulated in the I/R group (Fig. 4A-D). In addition, IHC analysis found that expression levels of Nrf2 in renal tissue were increased, as demonstrated by an increased number of Nrf2-positive cells observed in the I/R group (Fig. 4E and F). These data suggested that activation of the Nrf2 signaling pathway may initiate a protective response in renal I/R model mice. In order to verify the effect of Nrf2 on the NLRP3 inflammasome pathway in renal I/R, Nrf2-KO mice were used in subsequent experiments. Protein expression levels of Nrf2 and its target gene, HO-1, were significantly downregulated following renal I/R in KO mice compared with wild-type mice (Fig. 5A-C). Conversely, compared with the I/R wild-type group, the expression levels of NLRP3 and its adaptor ASC, caspase-1 and IL-1β were upregulated in the I/R KO group (Fig. 5A and D-G). These data indicated that renal I/R may induce activation of the NLRP3 inflammasome in Nrf2-KO mice. NaHS alleviates I/R-induced upregulation of NLRP3 protein expression levels in wild-type, but not in Nrf2-KO, mice.
NaHS has been demonstrated to exert effects on Nrf2 expression and NLRP3 inflammasome activation in different disease models (28)(29)(30). The present study investigated whether the effect of NaHS on activation of the NLRP3 pathway occurs via the Nrf2 signaling pathway by analyzing expression levels of Nrf2 and NLRP3 inflammasome-associated proteins in wild-type and KO mice. In wild-type mice, compared with the Con group, the expression levels of Nrf2, NLRP3, caspase-1 and IL-1β were upregulated by renal I/R injury. In addition, NaHS treatment further upregulated Nrf2 expression and downregulated NLRP3, caspase-1 and IL-1β expression levels in the I/R+NaHS group compared with the I/R group (Fig. 6A-E). In Nrf2-KO mice, no statistically significant differences were observed in Nrf2, NLRP3, caspase-1 and IL-1β expression levels between the I/R and I/R+NaHS groups (Fig. 6A-E). In the I/R+NaHS group, Nrf2 expression levels were significantly downregulated, whereas NLRP3, caspase-1 and IL-1β expression levels were significantly upregulated in Nrf2-KO mice compared with wild-type mice. These results suggested that NaHS may decrease NLRP3 inflammasome activation via the Nrf2 signaling pathway in renal I/R injury. NaHS alleviates renal injury, inflammation and apoptosis via the Nrf2 signaling pathway in renal I/R model mice. NaHS serves an important protective role against renal damage in AKI, I/R and other disease models (31). NaHS alleviated renal I/R-induced histopathological damage, as demonstrated by decreased tubular epithelial swelling, tubular dilation, interstitial edema and kidney histological scores in the I/R+NaHS group compared with the I/R group in wild-type mice (Fig. 7A and B). Indicators of renal function, including creatinine, BUN and KIM-1, were also decreased by NaHS treatment in the I/R+NaHS group compared with the I/R group in wild-type mice (Fig. 7C-E). However, in Nrf2-KO mice, NaHS was unable to decrease the increased kidney histological score and levels of renal function indicators induced by renal I/R. Furthermore, the percentage of apoptotic cells and the release of the cytokines TNF-α, IL-1β and IL-6 were all decreased following NaHS treatment in the I/R+NaHS group compared with the I/R group in wild-type mice (Fig. 7F-J). However, NaHS was unable to exert a protective effect against apoptosis and excessive release of inflammatory factors following reperfusion of renal tissue in the I/R+NaHS group of Nrf2-KO mice compared with I/R+NaHS wild-type mice (Fig. 7F-J). These results suggested that NaHS may alleviate renal injury, inflammation and apoptosis in renal I/R wild-type mice, but not in Nrf2-KO mice. Discussion Inflammation is a key pathogenic process of AKI induced by I/R (4). The NLRP3 inflammasome and Nrf2 signaling pathway participate in the regulation of inflammation in kidney injury (32). The present study found that AKI induced by I/R activated the NLRP3 inflammasome and its adaptor, ASC, promoted the maturation of pro-caspase-1 and pro-IL-1β and accelerated the excessive release of cytokines and apoptosis, which culminated in severe renal dysfunction. Inhibition of the NLRP3 inflammasome by MCC950 treatment improved renal function and decreased inflammation and apoptosis in renal tissue. In addition to activating NLRP3, renal I/R also activated the Nrf2 signaling pathway. However, the absence of Nrf2 in Nrf2-KO mice led to further upregulation of the NLRP3 inflammasome and its adaptor.
NaHS treatment was found to alleviate NLRP3 inflammasome activity via the Nrf2 signaling pathway. Moreover, renal injury, inflammation and apoptosis were decreased by NaHS treatment via Nrf2-mediated inhibition of NLRP3 inflammasome activation. Renal I/R injury is an important clinical problem and a primary cause of AKI, which leads to an increased risk of developing chronic kidney disease (33). Inflammation is an important pathological feature of ischemic injury and occurs as a consequence of immune cell activation via recognition of pathogen- and damage-associated molecular patterns (34). Moreover, it was previously reported that the NLRP3 inflammasome serves a role in a range of kidney diseases by regulating inflammation, pyroptosis, apoptosis and fibrosis (35). The NLRP3 inflammasome is a protein complex comprising NLRP3, its adaptor protein ASC, and caspase-1. Caspase-1 cleaves pro-IL-1β and pro-IL-18 into mature, activated secretory forms. Renal injury induced by different pathologies, including ischemia (33), activates the inflammasome complex, upregulates NLRP3 expression and promotes the subsequent maturation of pro-IL-1β (36). NLRP3 knockdown using small interfering RNA in mice or renal tubular epithelial cells exerts a protective effect against renal tissue injury induced by ischemia or cell injury in the absence of glucose (37). The present results indicated that renal I/R induced activation of the NLRP3 inflammasome, while inhibition of NLRP3 using MCC950 significantly downregulated NLRP3 and ASC expression levels and the maturation of pro-caspase-1 and pro-IL-1β in the renal I/R injury model. In addition, NLRP3 inhibition by MCC950 ameliorated renal dysfunction and decreased histopathological injury and score, excessive release of cytokines and the number of apoptotic cells. These data suggested that NLRP3 activation may serve a key role in renal injury and inhibition of NLRP3 may exert a protective effect against tissue injury and dysfunction induced by renal I/R. Previous studies have reported that Nrf2 participates in the inflammatory response (10)(11)(12). In addition, Nrf2 serves a key anti-inflammatory, anti-oxidative or anti-apoptotic role in ischemic conditions. Nrf2/antioxidant response element signaling pathway activation attenuates inflammation and apoptosis in renal tissue of cyclophosphamide-treated mice (38). Another study demonstrated that Nrf2 activation by sulforaphane inhibits NF-κB signaling pathway activity, thereby relieving inflammation in dystrophic muscle tissue (39). Furthermore, the knockdown of Nrf2 promotes NLRP3 inflammasome activation and leads to IL-1β activation in an ischemia model (11). The present data supported the findings that renal I/R may induce the Nrf2 signaling pathway and expression of its downstream target genes, such as HO-1. In Nrf2-KO mice, activation of the NLRP3 inflammasome was further promoted, as indicated by increased NLRP3, ASC, caspase-1 and IL-1β expression levels in the renal I/R model. These results suggested that Nrf2 exerted an inhibitory effect on NLRP3 inflammasome activation in renal I/R injury. Previous research has reported that the kidney produces H2S (40). Furthermore, H2S increases renal blood flow and promotes the clearance function of the kidney, as demonstrated by an elevated glomerular filtration rate (41). Moreover, H2S not only alleviates inflammation and oxidative stress, but also serves a crucial role in regulating endothelial dysfunction and hypertension (28).
H2S participates in the regulation of renal diseases, such as I/R injury and obstructive and diabetic nephropathy (42); however, the underlying mechanisms remain poorly understood. NaHS has been used as an exogenous H2S donor in research (43). In the present study, H2S was delivered in the form of NaHS. The present results demonstrated that NaHS ameliorated tissue injury, kidney dysfunction, excessive release of cytokines and apoptosis induced by renal I/R. It was previously reported that NaHS prevents microglial activation and inflammation induced by nerve injury via regulating the Nrf2 signaling pathway (44). Consistent with the hypothesis that NaHS regulates inflammation via the Nrf2 signaling pathway, the present results demonstrated that NaHS upregulated Nrf2 expression levels and inhibited expression levels of NLRP3 in renal I/R model mice. Furthermore, Nrf2 KO abolished the regulatory effects of NaHS on Nrf2 and the NLRP3 inflammasome. The present study also investigated the role of Nrf2 in renal injury and function following NaHS treatment; loss of Nrf2 not only partially reversed the protective effect of NaHS on kidney injury and dysfunction, but also reversed the inhibitory effect of NaHS on the inflammatory response and apoptosis following renal I/R. These results suggested that Nrf2 may be required for the protective effect of NaHS on AKI induced by I/R, and NaHS may exert its protective role against tissue injury via Nrf2-mediated NLRP3 inflammasome inhibition. In conclusion, I/R-induced renal injury activated the NLRP3 inflammasome and Nrf2 signaling pathway, and inhibition of the NLRP3 inflammasome protected against renal injury. Nrf2 negatively regulated NLRP3 inflammasome activation, and NaHS alleviated kidney injury and dysfunction, apoptosis and the inflammatory response via Nrf2-mediated NLRP3 inflammasome inhibition. These results provide novel insight into a potential future target for the treatment of renal ischemic injury.
2021-05-21T06:16:49.304Z
2021-05-17T00:00:00.000
{ "year": 2021, "sha1": "d1b51c1da949c6db573036ba3246fd417cefd0a6", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/mmr.2021.12157/download", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "71134cbc0739d243f347f0f466c050246b4fa60c", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
218985798
pes2o/s2orc
v3-fos-license
Glomerular Diseases Associated with Malignancies: Histopathological Pattern and Association with Circulating Autoantibodies Aim: Glomerular diseases (GD) associated with malignancies (AM, GDAM) have unique features, which are important to recognize in light of the progress made in cancer therapy. We aimed to describe the clinical and histopathological characteristics of patients with GDAM in relation to the presence of circulating autoantibodies, pointing to potential immune pathogenic pathways connecting cancer to GD. Materials and Methods: The included patients were studied retrospectively on the basis of a kidney biopsy proving GD and a related biopsy to establish the diagnosis of AM. We recorded patients' demographics, serological and laboratory parameters, histopathological findings, and the type of malignancy, GD, and therapy. Results: In total, 41 patients with GDAM, with a mean age of 63.1 (±10.7) years, were studied. In 28 (68.3%) cases, GD was associated with a solid tumor, and in 13 (31.7%) patients with a lymphoid malignancy. The most frequent histopathological pattern was membranous nephropathy (43.9%). Overall, at the time of GD diagnosis, 17% of the patients were positive for antinuclear antibodies (ANA), and 12.2% for antineutrophil cytoplasmic autoantibodies (ANCA), all against myeloperoxidase (MPO). In addition, 93.3% of the patients who had membranous nephropathy were negative for transmembrane glycoprotein M-type phospholipase A2 receptor (PLA2R) antibody. Sixteen patients (39.0%) presented with acute nephritic syndrome, of whom five (31.25%) developed rapidly progressive glomerulonephritis. In a mean follow-up time of 36.1 (±28.3) months, nine (21.95%) patients ended up with end-stage kidney disease, and eight (19.5%) died. Conclusion: We found that 3.2% of patients who underwent a native kidney biopsy in our institution during the past decade, for any reason, were identified as having some type of GD associated with a malignancy. Serology indicated a significant presence of ANA or MPO-ANCA antibodies in patients with nephritic syndrome and the absence of PLA2R antibodies in patients with membranous nephropathy. Introduction Glomerular diseases (GD) represent certain patterns of injury of the glomeruli, which may be associated with an inherited or acquired disorder and manifest with various clinical pictures and grades of severity, from asymptomatic urinary abnormalities to acute kidney insufficiency. Since the glomerulus and its surrounding Bowman's capsule constitute the basic filtration unit of the kidney, long-standing or aggressive disease causing glomerular changes may result in irreversible kidney damage, chronic renal failure, and end-stage kidney disease. Genetic and environmental factors have been implicated in the pathogenesis of GDs, including infections, medications, and malignancies. GD associated with malignancies (GDAM) represent a rare, secondary form of glomerular lesion and a complication of cancer, which remains a challenge for both nephrologists and oncologists. They are not directly related to the tumor burden, invasion, or metastasis but are assumed to be caused by tumor cell products, such as hormones, growth factors, cytokines, and tumor antigens [1]. Recognition of GDAM is clinically crucial for several reasons. First, if the neoplasm is not known, subsequent detection of an undiagnosed malignancy could be life-saving. Second, if GD is mistaken for idiopathic GD, it can lead to unsuccessful and possibly dangerous therapies.
Finally, the pathogenic mechanisms of many glomerular lesions seem to be related to the altered immune responses associated with malignancies and, thus, may facilitate the identification of biomarkers and the investigation of the pathology [2]. The pathogenesis of each type of GDAM is considered to be related to the nature of the respective neoplasm, and therefore, GD associated with solid tumors and lymphoproliferative disorders develop differently. Potential pathogenic mechanisms include in situ formation of immune complexes, with antibodies targeting a tumor antigen localized in the glomeruli, trapping of circulating immune complexes in the glomerular capillaries, and involvement of external factors, such as oncogenic viruses and/or altered immune function [3]. Estimations on the frequency of GDAM are confounded by several factors, including the more aggressive screening for cancer in patients with nephrotic syndrome than in those with nephritic syndrome and the fact that a significant proportion of patients may not present malignancy-related symptoms at the discovery of GD [4]. Besides, certain GDs, such as membranous nephropathy (MN) and pauci-immune glomerulonephritis (PI-GN), occur more often in the elderly, as does cancer, while many of the agents used in the treatment of GD are potentially oncogenic [5]. The management of patients with GDAM is targeted at the primary cause, i.e., the neoplasm, and requires a multidisciplinary approach to monitor both cancer and GD. The aim of this study was to describe the patterns of renal histopathology in patients with GDAM in relation to serological and clinical features and the type of malignancy. The detection of circulating autoantibodies in serum might also indicate potential immunopathogenic pathways connecting cancer and GD. Participants and Inclusion Criteria We retrospectively reviewed the medical records of all patients who were diagnosed with any type of GDAM in the period 2008-2018 in our hospital. GDAM might have been identified simultaneously with cancer, i.e., during the same admission workup, before, or after the diagnosis of cancer. Participants had to meet the following criteria: (i) histopathological diagnosis of GD in a native kidney biopsy, (ii) GD associated with active and biopsy-proven malignancy (based on a biopsy of the related lesion for solid tumors or a lymph node and/or bone marrow aspiration for lymphoid malignancies), (iii) diagnosis of the malignancy preceding or following the diagnosis of GD, typically within the same year, with the prerequisite that, if the diagnosis of the malignancy followed the diagnosis of the GD, the patient should not have received immunosuppressive therapy for GD. Conversely, if the diagnosis of the malignancy preceded the diagnosis of GD, the patient should not have received chemotherapy or immunotherapy. Exclusion criteria: (i) patients with multiple myeloma and other plasma cell dyscrasia-induced GD, (ii) patients with a history of GD, who were diagnosed with cancer after treatment with immunosuppressants, (iii) patients who received the diagnosis of GD after the diagnosis of malignancy and had received chemotherapy and/or immunotherapy.
Patients' demographics; biochemistry indexes, including serum creatinine and the corresponding estimated glomerular filtration rate (eGFR); 24 h urine protein excretion; findings provided by microscopic urine analysis; and hematological and serological tests (antinuclear antibodies (ANA), anti-double-stranded DNA (dsDNA), complement measurements (C3, C4), antineutrophil cytoplasmic autoantibodies (ANCA), anti-glomerular basement membrane autoantibodies, rheumatoid factor) were recorded at the time of kidney biopsy and thereafter every 3 months or by clinical indication. Histopathology findings were recorded, along with the extra-renal manifestations, therapies given for GDAM, and related responses. The main target of GD management was tumor resection and/or chemotherapy. When GDAM was manifesting as rapidly progressive glomerulonephritis (RPGN), with or without life-threatening symptoms (i.e., alveolar hemorrhage), a short course of immunosuppression was administered, including intravenous cyclophosphamide (500-750 mg/m²) with pulses of glucocorticoids, and/or plasma exchange. When used, plasma exchange was performed for a total of seven sessions over 2 weeks, exchanging approximately one predicted plasma volume (estimated by the following formula: [0.065 × body weight (kg)] × [1 − hematocrit]) per session, using freshly frozen plasma combined with human albumin as the replacement solution. Likewise, for patients with severe nephrotic syndrome resistant to diuretics, who might receive cyclosporine at a dose of 3 mg/kg of body weight, C0 and C2 levels were determined every three months. Outcomes of interest included remission, end-stage kidney disease (ESKD), and death. Remission of GDAM was defined (i) for patients with nephritic syndrome and/or RPGN, as the improvement or stabilization of renal function combined with the resolution of glomerular hematuria and the cessation of hemodialysis and (ii) for patients with nephrotic syndrome, as a sustained decrease in 24 h proteinuria of >50% of the initial measurement, which remained below the nephrotic range, combined with the disappearance of signs or symptoms of edema. The need for chronic dialysis was defined as ESKD. Estimation of GFR was done using the Modification of Diet in Renal Disease formula [6]. Renal Histopathology Methods Formalin-fixed and paraffin-embedded tissue sections were prepared for evaluation. Thirteen sections per paraffin block were obtained for light microscopy examination. Three eosin/hematoxylin-stained slides, as well as Periodic Acid Schiff (PAS), Silver, Masson and Congo-Red histochemical stains, were evaluated in each case. Immunofluorescence examination on frozen tissue for the detection of immunoglobulins, such as IgG, IgA, and IgM, complement components C3 and C1q, and κ and λ light chains (DAKO FITC, Polyclonal Rabbit 1/50 dilution), was performed in each case. Slides were examined under a NIKON ECLIPSE 80i Immunofluorescence Microscope with a digital camera. In addition, a small part of the renal cortex was kept in 2.5% glutaraldehyde solution. In case there was not a clear-cut diagnosis from the other two aforementioned methods or clinical indications required electron microscopy studies, this specimen was processed appropriately for electron microscopy examination with a FEI Morgagni 268 Electron Microscope equipped with a digital camera.
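As a quick numerical illustration of the two formulas referenced in the Methods above, a minimal R sketch is given below. The function names and example values are ours, and the MDRD equation is written in its commonly cited 4-variable (IDMS-traceable) form; the study itself cites [6] for the exact version used.

# Predicted plasma volume (litres) used to dose plasma exchange;
# weight in kg, hematocrit as a fraction, formula as given in the Methods.
predicted_plasma_volume <- function(weight_kg, hematocrit) {
  (0.065 * weight_kg) * (1 - hematocrit)
}
predicted_plasma_volume(70, 0.40)   # ~2.73 L for a 70-kg patient with Hct 0.40

# Commonly cited 4-variable MDRD equation (assumed form; see reference [6]).
egfr_mdrd <- function(scr_mg_dl, age, female = FALSE, black = FALSE) {
  175 * scr_mg_dl^(-1.154) * age^(-0.203) *
    ifelse(female, 0.742, 1) * ifelse(black, 1.212, 1)   # mL/min/1.73 m^2
}
egfr_mdrd(1.2, 63)   # ~61 mL/min/1.73 m^2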
Description of Patient Population A total of 41 cases with biopsy-proven GDAM were studied, including 28 (68.3%) patients with solid tumors and 13 (31.7%) with lymphoid malignancies. All included patients were Caucasians, with a mean age of 63.3 (±10.7) years at the time of the diagnostic kidney biopsy. There were 25 (60.9%) males. Patients were initially admitted to the internal medicine department or the hematology section, and during the workup, nephrology consultation was requested due to the discovery of abnormal renal indexes. Characteristics Related to GD Forty-one consecutive patients were identified as having GDAM, accounting for 3.2% of the native kidney biopsies performed in the period 2008-2018. Twenty-four cases (58.5%) presented with new onset of nephrotic syndrome, while eight (33.3%) of them also showed some degree of glomerular hematuria. Sixteen (39.1%) additional patients had newly diagnosed nephritic syndrome, of whom six (37.5%) presented acute glomerulonephritis, six (37.5%) RPGN (Tables 1 and 2), and the remaining four had variable degrees of glomerular hematuria and proteinuria. The mean 24 h protein excretion was 5,200 (±1,900) mg in patients with nephrotic syndrome and 907.5 (±519.7) mg in patients with nephritic syndrome, while among patients with acute nephritic syndrome, there were two with nephrotic-range proteinuria. Overall, 20 (48.8%) patients presented with some degree of renal dysfunction or experienced acute renal failure within a few weeks after the diagnosis of GDAM. Mean serum creatinine and albumin, for the total cohort, at the time of the diagnostic biopsy, were 2.3 mg/dL (range 0.6-9.6 mg/dL) and 2.9 (±0.7) g/dL, respectively. Six patients became dialysis-dependent soon after the diagnosis, five with acute nephritic syndrome and one with nephrotic syndrome. One-third of the patients had positive serological findings, including 7/41 (17%) patients who tested positive for ANA and 5/41 (12.2%) patients who tested positive for ANCA. Notably, all cases with ANCA were against myeloperoxidase (MPO). With respect to the clinical picture, 13 (31.7%) patients had extra-renal manifestations, i.e., pulmonary hemorrhage, hemoptysis, purpura, arthralgias, mononeuritis multiplex, and hemorrhagic colitis. Nineteen (46.3%) patients received immunosuppressive therapy for GDAM. Of these, 13 (68.4%) had acute nephritic syndrome with significant renal dysfunction and/or RPGN and dialysis dependence, while the rest had severe nephrotic syndrome, not remitted despite tumor removal or chemotherapy (Tables 1 and 2). Notably, all patients with ANCA presented with acute glomerulonephritis and/or RPGN. Characteristics Related to the Malignancy The diagnosis of GDAM was established concomitantly with the diagnosis of the malignancy in 14 (34.1%) cases, while in 13 (31.7%), the diagnosis of GDAM preceded the diagnosis of the malignancy, and in 14 (34.1%), it followed the diagnosis of cancer. Among patients with solid tumors (Table 2), the most frequent primary sites of cancer were colon (17.85%), lung (17.85%), breast (14.3%), and prostate (10.7%) (Figure 2), while the most frequent histological type was adenocarcinoma. Patients with lymphoid malignancies (Table 1) included nine cases with non-Hodgkin lymphomas (69.2%), one with Hodgkin disease, and three (23.1%) with leukemia (one acute and two chronic lymphocytic leukemia) (Figure 2).
At the time of GDAM diagnosis, eight (19.5%) patients had evident metastatic disease, two in the brain, four in lymph nodes, one in the bladder, and one in the skin. Twenty-one patients (51.2%) underwent surgery, and 30 patients (73.1%) received chemotherapy and/or hormone therapy, in addition to surgery or alone. One patient decided not to be treated for the malignancy. Patient and Renal Survival Patient survival: During a mean follow-up time of 36.1 (±28.3) months, eight patients (19.5%) died. For four of them, the cause of death was related to metastatic disease, for three to complications linked to therapy, and for one to a cardiovascular event. Two patients were lost to follow-up. In this series of patients, survival did not differ if GD was documented at the same time as the malignancy, prior, or after it. Outcome of GD: Nine (21.95%) patients ended up in chronic dialysis. Five of them had initially presented with severe nephrotic syndrome, which was associated with non-Hodgkin lymphomas on three occasions, acute lymphoblastic leukemia in one, and ovarian cancer in the remaining one. Nephrotic syndrome was caused by MN in three of them (Figure 3) and by minimal change disease (MCD) in one. All three patients experienced prolonged acute renal insufficiency, as a result of the hemodynamic changes, which caused non-resolved acute tubular necrosis, leading to permanent dialysis. All patients, except one, who denied treatment for his lymphoma, received chemotherapy, with no change in the renal function status, while the patient with ovarian cancer had a tumor resection as well. One patient, who had recently received the diagnosis of prostate cancer and presented with ANCA-associated RPGN and pulmonary-renal syndrome (Figure 4), remained dialysis-dependent until death, despite immunosuppressive treatment.
Three additional patients with nephritic syndrome, caused by membranoproliferative glomerulonephritis (MPGN), lupus-like glomerulonephritis, and PI-GN, responded to immunosuppressive therapy and achieved remission, but later ended up in ESKD due to extended, chronic renal injury. Overall, 24 (58.5%) patients achieved remission, in a mean time of 15.7 (±11.25) months. At the end of the observation period, the mean serum creatinine of the survivors who had achieved remission of the GD was 1.28 (±0.75) mg/dL. Figure 3. Immunofluorescence image of a kidney biopsy (patient #7, Table 2) showing IgG staining along the glomerular capillary wall, diagnostic of membranous nephropathy (IgG × 400). Discussion Among patients with GDAM in our institution, MN was the most prevalent histopathological diagnosis (43.9%). The proportion of patients with GDAM who had MN accounted for 14.9% of the total cohort of patients with MN diagnosed during the same period. Most of the patients with nephrotic syndrome due to MN also had a solid tumor. The association of MN with solid tumors is well known [7][8][9][10]. A meta-analysis, which included 785 patients, found that the estimated prevalence of cancer among patients with MN was 10% [11], with lung cancer being the most common type of malignancy in those patients. Lefaucheur et al. reported 240 patients with MN, 24 of whom received a diagnosis of cancer at the time of the diagnostic kidney biopsy or within the first year [4]. In the same study, it was shown that the incidence of cancer in those patients was 10 times higher than in the general population. In agreement with the study by Leeaphorn et al., the proportion of MN associated with lymphoid malignancies was not insignificant in our series of patients [11]. Most of our patients achieved remission of the nephrotic syndrome following tumor resection and/or chemotherapy, pointing to a clear correlation between remission of cancer and remission of the nephrotic syndrome, as previously reported [4]. Pathogenetic pathways which have been suggested [3] to underlie the connection of MN and cancer include (i) the formation of autoantibodies against a tumor antigen with analogous immunological properties to those of an antigen residing within the podocyte, which results in in situ immune complex production; (ii) the production of circulating immune complexes by circulating tumor antigens; (iii) the reaction of circulating antibodies with tumor antigens which are trapped in the glomerular membrane [3]. Whether a detailed cancer screening in patients with newly diagnosed MN is warranted prior to the induction of immunosuppressive therapy remains in question.
The cost-benefit assessment has been greatly facilitated by the discovery of the phospholipase A2 receptor (PLA2R) antibody [12], which is strongly associated with idiopathic MN. Among our patients with MN associated with malignancies, 93.3% of the tested patients were negative for the PLA2R antibody. Thus, employment of this test, combined with an overall clinical assessment, has proven very helpful in providing a first filter to identify patients who need a more thorough screening for malignancies. Furthermore, as seen in our study, 12 out of 41 patients had positive serological findings: among the patients with lymphoid malignancies, five with membranoproliferative glomerulonephritis presented ANA, whereas among the patients with solid tumors, five presented PI-GN histopathologically associated with MPO-ANCA, one presented PI-GN and ANA, and one presented lupus-like histopathologic findings and ANA. PI-GN was exclusively associated with solid tumors, while MPGN was mostly found in patients with lymphoid malignancies. The association of PI-GN with solid tumors has been repeatedly reported [13][14][15][16][17]. Biava et al. reported seven cases of RPGN associated with coexisting malignancies [13]. Several reports, however, highlight the fact that, due to the severity of the disease, which may follow a life-threatening course [18], cancer is discovered after the diagnosis of vasculitis, although it runs concurrently [2]. Among six patients with PI-GN in our study, only one was ANCA-negative, while all the remaining patients were MPO-ANCA-positive. Observations in patients with ANCA vasculitides who had exposure to specific environmental factors, such as drugs or thyroid disease, showed that perinuclear (P)/MPO-ANCA specificity was more frequent than cytoplasmic (C)/proteinase 3 (PR3)-ANCA [19,20]. Similarly, among several reports of patients with PI-GN and paraneoplastic syndromes [17,[21][22][23][24][25][26][27][28][29], all but one were found to have P/MPO-ANCA in their circulation. Mechanisms probably implicated in cases of vasculitis associated with tumors include the effect of tumor-associated antigens, antibodies, and products reacting with the capillary walls, inducing inflammation, as well as the direct effects of tumor cells on the endothelium, the potential of polyclonal activation of B lymphocytes and induction of monoclonal immunoglobulin activity, and the formation of antibodies directed toward endothelial antigens [30]. In this regard, clinical observations and experimental evidence steadily indicate that ANCA are pathogenic. The pathogenesis of ANCA vasculitis is considered multifactorial, with renal histopathology of patients with PI-GN showing activated neutrophils present in affected glomeruli, and the number of activated intraglomerular neutrophils correlating with the severity of renal injury and the level of renal dysfunction [31]. In vitro, ANCA can activate cytokine-primed neutrophils, causing an oxidative burst, degranulation, release of inflammatory cytokines, and damage to endothelial cells. Thus, acute vascular inflammation may be induced when resting neutrophils that have ANCA autoantigens sequestered in cytoplasmic granules are exposed to priming factors such as cytokines induced by infection or factors released by complement activation. This, in turn, causes the release of ANCA antigens on the surface of neutrophils and in the microenvironment around them [32]. The optimal therapy for RPGN in the setting of cancer is not known.
Immunosuppressive agents have the theoretical attendant risk of accelerating the development of malignancy and its spreading. However, arguments justifying their employment include: (i) rescue therapy for life-threatening conditions, (ii) restriction of the standard scheme by using low-dose intravenous cyclophosphamide plus glucocorticoids, typically for three months, and (iii) the use of cyclophosphamide by oncologists for the treatment of certain malignancies [33][34][35]. Thus, oncology consultation combined with an individualized approach and management is mandatory for such cases. Limitations of this study include the small number of patients, which is related to the fact that these disorders are particularly rare and their diagnosis can be very difficult, due to the delayed onset of symptoms for certain malignancies and the presence of other secondary causes of kidney disease. We included only patients who received both diagnoses either concomitantly or within the same year, and thus, occasional patients who manifested the GDAM outside this period may have been missed. The probability of random incidence of two independent events cannot be excluded, but the closer the two diagnoses, the lower the probability of chance concurrence. Yet, the lack of expected remission of GD after tumor resection or treatment was probably related to the fact that malignancies represent systemic diseases with overt and occasionally occult metastases. In conclusion, in this series of patients with GDAM, serological findings such as ANA positivity and detection of ANCA antibodies were relatively frequent, implying a potential connection with immunopathogenesis, especially in the case of PI-GN. In contrast, the absence of anti-PLA2R antibodies in the vast majority of patients with MN indicates the secondary cause of nephrotic syndrome, i.e., the malignancy, and thus may be very helpful in terms of clinical management to identify those patients with MN who need a more extensive workup for malignancies. Prompt diagnosis and individualized management, in accordance with the specific characteristics of the patient, the malignancy, and the GD histopathology, are critical for these patients. Statement of Ethics This is a retrospective, descriptive study, which was performed in compliance with the Declaration of Helsinki for published research. No other ethics committee approval was required for this study. This article does not contain any studies with human participants or animals performed by any of the authors. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest. The results presented in this paper have not been published previously in whole or part, except in abstract format.
2020-05-28T09:18:38.923Z
2020-05-25T00:00:00.000
{ "year": 2020, "sha1": "f3a734bf119ac89f4f416df51d85b9c31863d639", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4468/9/2/18/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ffbe985df1b95d03c25dbe0971c8ef1812f25dc7", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
235217276
pes2o/s2orc
v3-fos-license
Choosing Interim Sample Sizes in Group Sequential Designs This manuscript investigates sample sizes for interim analyses in group sequential designs. Traditional group sequential designs (GSD) rely on "information fraction" arguments to define the interim sample sizes. Then, interim maximum likelihood estimators (MLEs) are used to decide whether to stop early or continue the data collection until the next interim analysis. The possibility of early stopping changes the distribution of interim and final MLEs: possible interim decisions on trial stopping exclude some sample space elements. At each interim analysis the distribution of an interim MLE is a mixture of truncated and untruncated distributions. The distributional form of an MLE becomes more and more complicated with each additional interim analysis. Test statistics that are asymptotically normal without a possibility of early stopping become mixtures of truncated normal distributions under local alternatives. Stage-specific information ratios are equivalent to sample size ratios for independent and identically distributed data. This equivalence is used to justify interim sample sizes in GSDs. Because stage-specific information ratios derived from normally distributed data differ from those derived from non-normally distributed data, the former equivalence is invalid when there is a possibility of early stopping. Tarima and Flournoy [3] have proposed a new GSD where interim sample sizes are determined by a pre-defined sequence of ordered alternative hypotheses, and the calculation of information fractions is not needed. This innovation allows researchers to prescribe interim analyses based on desired power properties. This work compares interim power properties of a classical one-sided three-stage Pocock design with a one-sided three-stage design driven by three ordered alternatives. Background Influential books on group sequential methods, [1] and [2], rely on the asymptotic normality of test statistics to develop and justify group sequential designs (GSD). In Section 3.1 of [1], the authors introduce the joint canonical distribution assumption, which, if true, essentially implies that the central limit theorem (CLT) is applicable not just to test statistics calculated using independent data, but also to separate stage-specific data and data collected using non-ancillary interim stopping rules. This assumption allows the authors to treat test statistics as approximately normal. Alternatively, the probability model of Brownian motion also implies the joint asymptotic normality of stage-specific test statistics [2]. The possibility of early stopping is informative, which makes originally normal test statistics non-normal. This change of distributions changes the information. As shown on pages 174-175 of [2] and in [3], the MLE in the presence of possible early stopping does not change, but the distributional form of the test statistic is no longer normal. Moreover, the impact of early stopping is more profound in that non-normality holds asymptotically [3][4][5], as convergence to a stationary distribution continues to exist. Asymptotic non-normality has repeatedly been found before in other adaptive designs [6][7][8][9][10]. The equivalence between interim information ratios and interim sample size ratios for independent and identically distributed data is used to determine interim sample sizes in GSDs. But interim information ratios derived from non-normally distributed data differ from those derived from normal data.
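The distortion just described is easy to visualize with a small simulation. The sketch below is ours and purely illustrative; the sample sizes and critical value are borrowed from the Pocock design introduced in the next section, and the two z-statistics are drawn from their canonical joint normal law before conditioning on continuation past stage 1.

# Under the null, condition the stage-2 z-statistic on "no early stop at stage 1":
# the resulting distribution is truncated and no longer standard normal.
set.seed(1)
B  <- 1e5
n1 <- 244; n2 <- 488               # cumulative sample sizes at looks 1 and 2
c1 <- 1.9922                       # stage-1 efficacy boundary
z1 <- rnorm(B)                     # stage-1 z-statistic
z2 <- sqrt(n1/n2) * z1 + sqrt(1 - n1/n2) * rnorm(B)   # canonical joint law
z2_cont <- z2[z1 <= c1]            # stage-2 statistic among continuing trials
c(mean(z2_cont), sd(z2_cont))      # both drift below 0 and 1 (about -0.04 and 0.97)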
The possibility of early stopping makes the normality assumption invalid. In this manuscript, GSDs relying on pre-determined fractional sample sizes are referred to as GSD-FSS. Because, as previously suggested, the theoretical justification used to choose sample sizes for interim analyses is not valid, a new approach was suggested in [3] in which interim sample sizes are determined by a sequence of ordered alternative hypotheses (GSD-SOA). Section 2 introduces a three-stage Pocock GSD. Section 3 describes a GSD-SOA and develops two GSD-SOA designs based on different α-spending functions. Section 4 compares stopping probabilities of the designs via Monte-Carlo simulations. Finally, Section 5 concludes the manuscript with a short discussion. A one-sided three-stage Pocock group sequential design Group sequential designs have been implemented in various statistical software packages, including but not limited to the SEQDESIGN procedure in SAS, the R package "gsDesign" and Cytel's EAST software. All these software products rely on the same theory and use information fractions to determine sample sizes of interim analyses. Then, to evaluate power properties, the software relies on Armitage's formula [11], which is a recursive sub-density formula that incorporates the possibility of early stopping at interim analyses. Armitage's algorithm correctly calculates overall statistical power under the alternative hypothesis. Thus, the software provides correct power calculations despite calculating "information fractions" from normal densities. Consider a simple one-arm study where a new treatment needs to be tested against a historically established level. Thus, the null hypothesis of zero mean difference from historical controls, θ=0, needs to be tested. The alternative hypothesis is defined on a standardized scale (mean divided by a standard deviation): θ=0.1. The use of a standardized scale (effect size) eliminates the need to estimate a nuisance parameter (the standard deviation) in the design problem. To design a three-stage clinical trial with the possibility of early efficacy stopping, one first chooses an α-spending function. Pocock's α-spending function is a popular choice, determined by having the same critical values at all interim analyses. SAS SEQDESIGN syntax to design such a study is:

proc seqdesign altref=0.1 pss stopprob errspend;
   OneSidedPocock: design nstages=3 alt=upper method=poc
      BETA=0.2 ALPHA=0.05 STOP=REJECT;
   samplesize model=onesamplemean(stddev=1);
run;

The following R code, using the "gsDesign" package, leads to identical sample sizes and critical values:

gsDesign(k=3, test.type=1, sfu="Pocock", n.fix=NULL,
         alpha=0.05, beta=0.2, delta=0.1)

Output from this SAS procedure states that the first interim analysis should be performed after n(1)=244 patients, the second at n(2)=488 if the study did not stop at stage one, and the third and final analysis at n(3)=732 if the study did not stop before. These sample sizes are justified by information fractions 0.3333 at stage k=1, 0.6667 at k=2, and 1.0000 at k=3. At each interim analysis, the test statistic (the sample mean multiplied by the square root of the sample size and divided by the sample standard deviation) is compared against the efficacy critical value 1.9922 (c1=c2=c3): if above, the study is stopped for efficacy; if below, the study continues with additional data collection until the next interim or final analysis. The α-spending function is defined by cumulative stopping probabilities 0.0232 at stage k=1, 0.0387 at k=2, and 0.0500 at k=3, under the null.
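These reported numbers are easy to check by simulation. The following sketch is ours; it assumes a known standard deviation of 1 (the trial itself uses the sample standard deviation) and uses the sample sizes and common critical value reported above.

# Monte-Carlo check of the three-stage Pocock design: cumulative stopping
# probabilities under the null should be roughly 0.0232, 0.0387 and 0.0500.
set.seed(2)
n    <- c(244, 488, 732)            # cumulative sample sizes at the three looks
crit <- 1.9922                      # common Pocock critical value
sim_trial <- function(theta) {
  x <- rnorm(max(n), mean = theta)  # sd = 1 assumed known in this sketch
  for (k in seq_along(n)) {
    z <- sqrt(n[k]) * mean(x[1:n[k]])
    if (z > crit) return(k)         # stop for efficacy at look k
  }
  0                                 # never rejected
}
stops <- replicate(2e4, sim_trial(theta = 0))
cumsum(table(factor(stops, levels = 1:3))) / 2e4   # ~0.023, 0.039, 0.050
mean(replicate(2e4, sim_trial(theta = 0.1)) > 0)   # overall power, ~0.80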
More generally, the SAS output reports stage-wise stopping probabilities among its operational characteristics. Note that this design is not driven by these stopping probabilities; rather, it is determined by a chosen α-spending function and fractional sample sizes. Monte-Carlo simulation results for this Pocock design are reported in Table 2. In the next section, a pre-determined α-spending function and a sequence of ordered alternatives to be detected with pre-determined stopping probabilities are used to determine interim sample sizes and stage-specific critical values.

Group Sequential Design with Interim Sample Sizes Defined by Ordered Alternatives

In [4], the sample sizes of interim analyses were chosen to attain desired power at several ordered alternatives. For the one-sided three-stage design considered in Section 2, they suggested choosing interim sample sizes to secure 80% statistical power at all three alternatives, θ=0.3, θ=0.2, and θ=0.1, regardless of when stopping occurs. They relied on equal stage-specific rejection probabilities (0.0172) under the null hypothesis. Their chosen sample sizes for the interim analyses were n(1)=98, n(2)=196, and n(3)=772, and the stage-specific critical values were c1=2.12, c2=2.02, and c3=2.02. Rejection probabilities and expected sample sizes under various alternatives are reported in Table 1.

Note that Table 1 relied on an equal probability of rejecting the null hypothesis at each of the three stages when θ=0. Let αk denote the stage-specific rejection probability, that is, the probability of rejecting the null hypothesis at analysis k given that the study did not stop at stage k-1. Then, if α1=0.0172 and α2=0.0172, the probability of rejecting by or at stage 2 is 0.0172 + (1-0.0172)·0.0172 = 0.0341. Similarly, if α3=0.0172, the overall type I error is 0.0172 + (1-0.0172)·0.0172 + (1-0.0172)²·0.0172 = 0.0507. Due to rounding, the type I error is not exactly 5%, but it is close enough for illustrative purposes. These results are consistent with the Monte-Carlo simulations reported in Table 1 under θ=0. This, however, highlights the fact that Pocock's design does not have equal rejection probabilities at each stage: uniform critical values do not translate into equal rejection probabilities.

Monte-Carlo Simulation Experiments

Each Monte-Carlo simulation study in this section relied on 100,000 random sequences of standard normal random variables.

Discussion

GSDs are predominantly defined by a triplet of (1) an α-spending function, (2) overall statistical power, and (3) fractional sample sizes (FSS), whereas interim stopping probabilities are not directly controlled; they are determined by the input triplet. The alternative illustrated in this paper is motivated by the recognition in [3] that the possibility of early stopping alters the finite-sample and asymptotic distributions of test statistics, and this alteration invalidates the FSS assumption that sample size fractions are equal to information fractions calculated from normal densities. One option is to calculate information measures from the true asymptotic distributions, but this is a computationally intensive proposition, and the relationship between the true information and the sample size may not be simple. To avoid FSS as an input to GSDs, researchers can use a sequence of alternative hypotheses, each with a pre-determined stopping probability. In this paper, several Monte-Carlo simulation studies highlight these new GSD-SOA designs.
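As a minimal arithmetic check of the stage-wise recursion used in the ordered-alternatives section above (illustrative only), equal stage-specific rejection probabilities of 0.0172 reproduce the quoted cumulative type I errors:

def cumulative_error(alphas):
    total, survive = 0.0, 1.0
    for a in alphas:
        total += survive * a      # reject at this stage, having continued so far
        survive *= 1 - a          # probability of continuing past this stage
    return total

print(round(cumulative_error([0.0172] * 2), 4))  # 0.0341
print(round(cumulative_error([0.0172] * 3), 4))  # 0.0507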
Examples demonstrate how to use stopping probabilities as design inputs at alternative hypotheses fixed for each interim test. As the clinical research community is familiar with the concept of statistical power, we anticipate that this new design will improve the clinical interpretation of design choices and facilitate the use of GSDs in clinical research.
The Prosodic Acquisition Path Hypothesis: Towards explaining variability in L2 acquisition of phonology

Assuming that word-prosodic parameters are organized into a hierarchical tree where certain parameters are embedded under others, this paper proposes the Prosodic Acquisition Path Hypothesis (PAPH). The PAPH predicts different levels of difficulty and paths to be followed by L2 (and L1) learners based on the typological properties of their L1 and the L2 they are learning. On the PAPH, L2 acquisition is assumed to be brought about via a process of parameter resetting. During this process, certain parameters are expected to be easier to reset than others, based on such factors as economy, markedness, and robustness of the input, which is reflected in part by their location on the tree of parameters proposed in this paper. Evidence for the proposal comes from previous formal phonological and L1 acquisition literature. The predictions concerning the learning path are tested through an experiment which examines the productions of English-speaking learners of Turkish, thereby involving two languages that are maximally different from each other regarding the location of word-level prominence, as well as how it is assigned. The PAPH is a restrictive (and falsifiable) approach, where the predictions regarding the stages learners go through are constrained both by certain learning principles and by the options made available by UG.

Introduction

Although there has been ample research on the second language (L2) acquisition of prosody and on the relevance of language universals for the language acquisition process, and persistent problems have been demonstrated to exist in the L2 due to transfer of first language (L1) prosodic structures (see e.g. Broselow 1988; Archibald 1993; 1994; 1998; Eckman & Iverson 1993; Broselow & Park 1995; Hancin-Bhatt & Bhatt 1997; Goad 2002; Steele 2002; Goad & White 2004; 2006), not much research has been done on the L2 acquisition of stress in particular. This is despite the fact that stress (or, more generally, accent) is often the first phenomenon to come to mind when one speaks of formal prosody (see e.g. Hyman 2006; van der Hulst 2014). Previous research on L2 stress is, in fact, limited to a handful of studies, and has focused almost exclusively on L2 English (see e.g. Archibald 1992; 1993; 1994; Pater 1997; Tremblay 2007). The acquisition of stress, or accent, in so-called fixed-stress languages, such as French and Turkish, has never been investigated, perhaps due to the expectation that the acquisition task should be too easy in these languages to give insight into learners' abstract generalizations, or into the language acquisition process in general, given that prominence regularly falls on the same (e.g. final) syllable within a domain. This is despite anecdotal evidence that learners of these languages, with L1s like English, stress syllables in these languages in ways that make native speakers hear them with an 'English accent' (Fromkin, Rodman & Hyams 2010).
This paper proposes a path for the L2 (and L1) acquisition of word-level prosody, the Prosodic Acquisition Path Hypothesis (PAPH). In doing so, it focuses on the L2 acquisition of Turkish word-level prominence, although the PAPH is assumed to be applicable to the acquisition of the stress/prominence system of any natural language. The PAPH predicts different levels of difficulty and paths to be followed by L2 learners based on the typological properties of their L1 and the L2 they are learning (an idea regarding language transfer that dates back to Lado 1957), and also on the basis of a hierarchical tree representation of the relationships proposed to hold between prosodic parameters. Most parameters related to the Foot, the domain of stress assignment, are incorporated into the PAPH (see e.g. Dresher & Kaye 1990; Hayes 1995 for an overview of foot-related parameters). Not every one of these parameters is, however, hypothesized to be equally easy to reset; depending on a variety of factors, such as their location on the parameter tree and markedness, certain parameters, such as Foot-Type (Trochaic vs. Iambic), are hypothesized to be easier to reset than others, such as Iterativity.

L2 acquisition of some of these parameters has already been examined in previous research, most notably by Archibald (1992; 1993a; b; 1994; see also Pater 1997). Investigating the acquisition of L2 English stress by Polish- (Archibald 1992), Hungarian- (Archibald 1993a) and Spanish-speaking (Archibald 1993b) learners, Archibald argued that both UG principles and transfer of L1 parameter settings, as well as an ability to reset L1 values of parameters, based on appropriate cues and (indirect) negative evidence, were crucial determinants of interlanguage prosodic representations (see Archibald 1994 for a summary). As will be evident later, the current study provides additional evidence for this argument in demonstrating that both principles of UG and L1 transfer are relevant factors in determining the nature of L2 prosodic representations. In addition, the current study argues for a path that will be followed in resetting these parameters, a path that incorporates all stress/foot-related parameters, and, unlike Archibald, one that will be followed only on the basis of positive evidence (see e.g. Schwartz & Gubala-Ryzak 1992 and White 1992 for comparable arguments from syntax against the role of negative evidence in L2 acquisition).

The predictions concerning the learning path were tested through an experiment which examined the productions of English-speaking learners of Turkish, a language with word-final accent. The predictions of the PAPH, however, go beyond the L1 English and L2 Turkish context tested in this paper. It makes overarching predictions, relevant for the L2 (and L1) acquisition of any language, although this paper forms a starting point with two languages that are maximally different from each other: English, on the one hand, with its extremely complex trochaic stress system that results from specific settings of various parameters, and Turkish, on the other, with its fixed, word-final accent.
I assume that prosodic parameters are hierarchically organized into a tree where some parameters are embedded under others (see Dresher & Kaye 1990 for a similar approach, but without a tree). This predicts, depending on the depth of embedding of a parameter within the tree, a specific learning path for L1 and L2 acquisition, and ensures, as with Dresher & Kaye (1990), that some foot-related parameters are open (i.e. not pre-set to a certain value; e.g. End-Rule is open in an Unbounded grammar) and can stay as such. In line with current thinking on parameter resetting in L2 acquisition (most notably in syntax), I also assume that once a parameter is activated (i.e. set to one value or another) in an L1, it should be impossible to deactivate it (i.e. reopen it) in an L2 where this parameter is not relevant (more on this in Section 3). On the other hand, resetting parameters from one value to another (as long as it does not result in a prosodic constituent being removed from the grammar) is predicted to be possible. That is, although deactivation is not possible, resetting is, on this account. Not all types of resetting, though, are hypothesized to be equally easy: certain parameters are expected to be easier to reset than others, based on such factors as economy, markedness, and robustness of the input, which, as will be illustrated later, is reflected in part by their location on the tree of parameters proposed in this paper. Resetting parameters with embeddings, which leads to the de facto deactivation of the parameters that depend on them, is hypothesized to be highly costly.

Taken together, the main tenets of the proposal predict, in the case of L1 English-speaking learners of L2 Turkish, that before producing fixed, word-final accent, the learners will go through a number of well-defined stages (making their productions more and more similar to the target language), whereby their grammar will be different both from the L1 and from the L2, but similar to other natural languages of the world. This is despite the fact that learners come across word-finally prominent words in the target language from the very beginning of the language acquisition process.

Thus, at each stage of the language acquisition process, learners produce some word-finally prominent outputs, but also some that are stressed on other syllables (e.g. penultimate and antepenultimate). As such, on the surface, it looks like there is a lot of 'variability' regarding the location of stressed syllables, variability that may look completely random at first glance. After all, the goal is to produce word-finally prominent words in the target language, given the word-finally prominent input. As I will explain in detail later, however, the PAPH provides a principled explanation for this variability, predicting which syllable within a word will be stressed by a learner at a given stage of L2 acquisition. As such, this paper contributes to the overall debate on the issue of variability in interlanguage grammars, a topic that has recently generated a lot of attention in syntax and morphology, particularly with respect to learners' variable omissions of functional morphology (see e.g. Lardiere 1998a; b; Ionin & Wexler 2003; White 2003b; Ionin, Ko & Wexler 2004 for different accounts of variability). The issue of variability in L2 grammars has, however, received little recent attention in phonology, even though it was the topic of much L2 phonological research as early as the mid-1970s (e.g. L. Dickerson 1975; W.
Dickerson 1976; Tarone 1980; 1983; Tropf 1987; Eckman 1991; Broselow et al. 1998; Hancin-Bhatt 2000; Lombardi 2003; Broselow 2004; see also Major 2001 and Eckman 2004 for a summary), especially under the influence of new approaches to variability in linguistics, most notably, first, with the introduction of 'variable rules' in sociolinguistics (e.g. Labov 1969), and then Optimality Theory (Prince & Smolensky 1993). This is despite the fact that successful phonological explanations have recently been extended to account for variability in morphology and syntax (see e.g. the Prosodic Transfer Hypothesis of Goad, White & Steele 2003; Goad & White 2004; 2006). Explaining variability in L2 phonology itself, and especially within the domain of prosody, is at least as crucial, because, as with variability in the suppliance of functional morphology, phonological variability in interlanguage grammars is a leading indication of non-native-like performance in a target language, and is persistent even in end-state grammars, often functioning as a leading source of 'foreign accentedness' (see e.g. Major 2001).

The remainder of this paper is organized in the following way: Section 2 presents L1 and L2 language background, explaining the rules (and parameters) of stress/prominence in English (L1) and Turkish (L2). It then moves on to detail the current proposal, the PAPH, as well as presenting the specific predictions of this proposal for the learning scenario to be tested here. Section 3 presents acquisitional and formal evidence for the PAPH.

English stress

Because of its complexity, English stress has been the topic of much research in the generative tradition, starting with Chomsky & Halle's (1968) Sound Pattern of English (SPE). The metrical theory to be employed here is the one proposed in Liberman (1975) and later developed in Liberman & Prince (1977), Selkirk (1980), Hayes (1981), and Halle & Vergnaud (1987).

Before we lay out the mechanisms of stress assignment in English, we start with the following data, modified from Kager (1989: 28) (secondary stress added), which are representative of English nouns. Notice that while the forms in (1a) have antepenultimate stress, the rest (1b, c, d) bear stress on the penult.

What these data show, as was first noted by Chomsky & Halle (1968), is that primary stress, in English, falls on the antepenult, if present and if the penult is light, and on the penult otherwise. In a parametric theory (e.g. Selkirk 1980; 1984; Hayes 1981; 1995; Prince 1983), these patterns reveal several things about the correct settings of prosodic parameters in English. First of all, the last syllable is always invisible to stress assignment, suggesting that Extrametricality is set to Yes in English; the final syllable of English nouns, as well as (almost) all adjectival suffixes, is invisible to stress assignment (Hayes 1981; 1982), although, exceptionally, some words with final heavy syllables, such as políce and raccóon, have final stress (see Halle & Vergnaud 1987 for more on these).

(3) Extrametricality: Yes | No

Note that the notion of Extrametricality was first introduced in Liberman & Prince (1977) to account for the exceptional behavior of a set of English suffixes. The version introduced by Hayes applies more broadly, to all polysyllabic English nouns.
If final syllables are extrametrical, the antepenultimate stress pattern observed in (1a) is evidence that foot construction is right-to-left and that feet are left-headed (trochaic):

(4) Direction of foot construction: Left-to-Right | Right-to-Left

Only under these assumptions can words like América and génesis (both from (1a)) receive a unified treatment. Given only a word like América, right-to-left trochees and left-to-right iambs predict the same surface stress: [A(méri)<ca>] and *[(Amé)ri<ca>] respectively. Given also génesis, on the other hand, it is evident that the analysis should be based on (right-to-left) trochees, i.e. [(géne)<sis>]; analyzing it as involving a left-to-right iamb would violate Hayes' (1995) Priority Clause, since there would then have to be a degenerate (non-binary) foot located at the edge where foot construction starts: *[(gé)ne<sis>].

A comparison of the data in (1a) with (1b) and (1c) illustrates, further, that Weight-Sensitivity is set to Yes in English, for heavy syllables, when present, are stressed. In other words, syllables that are heavy, whether through a long vowel as in (1b) or through a coda as in (1c), have to be in the head position of a foot. The Yes setting of Weight-Sensitivity is, then, the reason for the lack of antepenultimate stress in words of the profile in (1b) and (1c). Therefore, no patterns such as *ároma and *éllipsis emerge, for they would have a heavy syllable in the dependent position of a foot: *[(ároʊ)<mə>] and *[(ɪ́lɪp)<sɪs>] respectively (i.e. instead of the attested [ə(róʊ)<mə>] and [ɪ(lɪ́p)<sɪs>]).

Further, the fact that words like aróma and ellípsis do not have initial secondary stress, i.e. that the initial light (L) syllable is left unparsed, is evidence that Foot Binarity is satisfied in English, and is satisfied at the moraic level. That is, every foot in English must be composed of at least two moras:

Foot Binarity: Yes | No (at the moraic level)

The alternatives, namely that feet are binary at the syllabic level or that a non-binary degenerate foot is built at the end of foot construction, would both incorrectly result in initial stress in aroma and ellipsis, with parses like *[(ároʊ)<mə>] and *[(à)(róʊ)<mə>] respectively.

Note that in bisyllabic words with a light penult (see e.g. vílla (1d) and léthal (2d)), which are stressed on the penult, satisfaction of Foot Binarity would result in a violation of Extrametricality, e.g. [(σσ)]. Satisfying Extrametricality, on the other hand, violates Foot Binarity, i.e. [(σ)<σ>]. That is, in order to satisfy a higher-ranking requirement, that every PWd must have at least one stressed syllable, a stress pattern emerges in the language which is in conflict with the correct setting of one of these two lower-ranked parameters. For words that end in a final light syllable (e.g. Ánna, vílla), a violation of either Foot Binarity or Extrametricality would equally be sufficient to account for these data, i.e. [(vɪ́)<lə>] and [(vɪ́lə)] respectively. Violation of Extrametricality will, however, not be able to account for the initial stress in bisyllabic forms that end in a heavy syllable (e.g. Vénice, cábin), where the (light) penult is stressed. If Extrametricality were exceptionally violated in these cases, final syllables would bear stress, given that Weight-Sensitivity is set to Yes in English, i.e. *[kæ(bɪ́n)]. So Foot Binarity is exceptionally violated in bisyllabic words with a light penult, i.e.
*[(kǽ)<bɪn>]. Of course, as mentioned above, it is not possible to know, in bisyllabic words with two light syllables, such as villa, [vɪ́lə], whether this is due to a violation of Foot Binarity or of Extrametricality. Nevertheless, for expository reasons, I will assume here that it is always Foot Binarity that is violated in bisyllabic words with a light penult.

Turning back to the longer forms in (1) and (2), the fact that secondary stress appears in words that are long enough for the creation of more than one binary foot, such as Mìnnesóta and Àpalàchicóla, is evidence that the Iterativity parameter is set to Yes in English (see (8)), which is illustrated in (9) below. The words in (9) further illustrate that feet are maximally binary in English, i.e. bounded, since patterns with only primary stress, such as *[(mɪ́nəsoʊ)<tə>] and *[(ǽpəlætʃəkoʊ)<lə>], are not observed. Finally, as the words in (9) illustrate, End-Rule, in English, is set to Right, as the head of the rightmost foot bears primary stress. In other words, the rightmost foot within the PWd heads the PWd.

Verbs behave slightly differently from nouns (and derived adjectives) in English. Though nouns will be the subject of the experiments in this paper, a complete account of English stress must also capture the facts observed in the verbal domain. Consider the words in (13), slightly modified from Kager (1989: 29). On the surface, the words in (13b) and (13c) seem not to reflect the parameter settings we have illustrated above, for they can even be stressed on their final syllable. As Hayes (1982) has demonstrated, however, all the parameter settings for English verbs are the same as for English nouns, with the exception of Extrametricality; for verbs, the final consonant is extrametrical, rather than the whole syllable. This is illustrated in (14) below.

In conclusion, the parameter settings for stress in English can be summarized as follows:

(15) a. Boundedness: Yes
b. Foot-Type: Left-headed (i.e. trochee)
c. Iterativity: Yes
d. Direction of foot construction: Right-to-left (R-L)
e. Extrametricality: Yes (final syllables for nouns, final consonants for verbs)
f. Foot Binarity: Yes
g. End-Rule: Right

In other words, English builds iterative, binary, weight-sensitive trochaic feet from right to left, ignoring the rightmost syllable (or consonant, in the case of verbs), and the head of the rightmost foot bears primary stress, by means of End-Rule/Right.
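A minimal sketch of how the noun settings in (15) interact, assuming syllable weights are already known (L = one mora, H = two); this is an illustration of the parametric description above, not the paper's analysis engine:

def english_noun_stress(weights):
    """Return (primary_index, secondary_indices) for a list of mora counts."""
    visible = list(range(len(weights) - 1))  # Extrametricality: skip the final syllable
    feet = []                                # store the head index of each foot
    i = len(visible) - 1
    while i >= 0:                            # build feet from right to left
        if weights[visible[i]] == 2:         # (H): a heavy syllable is a bimoraic foot
            feet.append(visible[i]); i -= 1
        elif i >= 1:                         # (LL): two lights form a moraic trochee
            feet.append(visible[i - 1]); i -= 2
        else:
            i -= 1                           # a lone L stays unparsed (Foot Binarity)
    if not feet:                             # bisyllabic L-L nouns, e.g. villa:
        return 0, []                         # Foot Binarity is exceptionally violated
    primary = max(feet)                      # End-Rule Right: rightmost foot head wins
    return primary, sorted(h for h in feet if h != primary)

print(english_noun_stress([1, 1, 1, 1]))     # America: (1, [])   -> A(méri)<ca>
print(english_noun_stress([1, 1, 2, 1]))     # Minnesota: (2, [0]) -> (Mìnne)(só)<ta>

Running the same function on aroma-type inputs ([1, 2, 1]) leaves the initial light syllable unparsed, reproducing the absence of initial secondary stress discussed above.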
Accent in Turkish

Unlike English, the accentual system of Turkish is rather simple (at least on the surface); word-level accent in Turkish falls on the last syllable of prosodic words (e.g. Lees 1961; Lewis 1967; Underhill 1976; Sezer 1983; van der Hulst & van de Weijer 1991; Hayes 1995; Kornfilt 1997; Inkelas & Orgun 1998; 2003; Inkelas 1999; Kabak & Vogel 2001; Özçelik 2014; to appear). This is demonstrated in (16) below; notice that irrespective of the rhymal profile of the syllables involved (i.e. whether they have a coda or a long vowel or not), word-level accent falls on the final syllable of the word.

Although most Turkish words are finally prominent as illustrated above, the language also has a small set of words that bear non-final accent. When accent is non-final in Turkish, it is commonly referred to as 'exceptional stress' (see e.g. Kaisse 1985; 1986; van der Hulst & van de Weijer 1991; Inkelas & Orgun 1995; 1998; 2003; Kabak & Vogel 2001; Özçelik 2014; to appear). Exceptional stress can involve either roots, as in (19) (which are mostly borrowed words or place names; Sezer 1983; Kornfilt 1997), or certain affixes, as demonstrated in (20), most of which are pre-stressing ((20a/b)), with some bisyllabic suffixes stressed on their first syllable ((20c)):

(20) a. arabá-jla 'car-inst/com'
b. bɨrák 'leave', bɨrák-ma 'leave-neg'
c. gel 'come', gel-íyor 'come-prog'

In pursuit of a unified account of the two types of prominence, and without making any recourse to the Foot, Kabak & Vogel (2001) argue that prominence in Turkish falls on the last syllable of PWds, but that some suffixes, such as exceptional-stress-driving suffixes, are outside of the PWd. Özçelik (2013; 2014; to appear) goes one step further and argues that UG allows for footless languages and that Turkish is such a language. The Turkish grammar, on this account, does not assign foot structure, and thus, in the absence of feet, intonational prominence (and hence not 'stress') falls on the final syllable of PWds. Pre-stressing and stressed exceptional suffixes such as those in (20) differ, however; these are pre-specified with foot edges in the underlying representation, and this foot emerges on the surface too because of faithfulness to this information. Therefore, the grammar on this account is unable to parse syllables into feet (i.e. the constraint PARSE-σ is low-ranked), but keeps feet if they are already present as part of a word's lexical representation. As Özçelik (2014; to appear) demonstrates, this proposal receives both formal evidence (see Özçelik 2013; 2014; to appear) and independent additional evidence from the acoustic correlates of prominence/stress in Turkish: whereas the acoustic correlates of exceptional stress involve both intensity and a sharp rise in F0 (a pattern typical of trochaic languages), regular final prominence is accompanied only by a slight F0 rise that is optional (e.g. Konrot 1981; 1987; Pycha 2005; Levi 2005). Therefore, given the criteria presented in the phonetic literature (see e.g. Beckman 1986; Ladd 1996; Hualde et al. 2002), and given the lack of greater intensity or duration associated with the prominent syllable under Turkish regular accent, as well as the optionality (and weakness) of the pitch rise, 'regular stress' in Turkish is not stress, nor does it involve foot structure; it is, rather, intonational (footless) prominence (see Özçelik to appear for a detailed argument for this stance, as well as a discussion of the typological implications of such a proposal).
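A minimal sketch of this footless analysis, under the simplifying assumption that a lexically pre-specified foot can be represented as a stored stress position (the example words below are illustrative, not drawn from the paper's data set):

def turkish_prominence(syllables, lexical_stress=None):
    """Index of the prominent syllable in a prosodic word.

    The grammar assigns no feet, so intonational prominence defaults to the
    final syllable; a lexically pre-specified foot (e.g. from an exceptional
    suffix such as -(j)la) survives by faithfulness and overrides the default.
    """
    if lexical_stress is not None:
        return lexical_stress                # faithfulness to the foot in the UR
    return len(syllables) - 1                # regular case: word-final prominence

print(turkish_prominence(["ki", "tap", "lar"]))          # 2: regular final accent
print(turkish_prominence(["a", "ra", "ba", "jla"], 2))   # 2: arabá-jla (exceptional)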
Given the discussion above, Turkish words with regular accent/prominence can be represented as in (21). Notice that this representation, unlike the one in (12) for English, has no foot structure, and syllables are immediately dominated by the PWd, with no other constituent in between. Compare this with (22), the representation of a Turkish word with exceptional stress, where the single available foot results from faithfulness to the information in the UR of the exceptional suffix -(j)la, instead of being assigned by the grammar.

I investigate, in this paper, the second language (L2) acquisition of Turkish word-level accent by learners whose first language (L1) is English, a language with a well-known and highly complex iterative stress system, where the interaction of several different parameter settings determines the location of word-level stress. As was mentioned in Section 2 above, I assume, along with recent formal research on Turkish, that Turkish is a language whose grammar does not assign foot structure (Özçelik 2013; 2014; to appear), and that regular final prominence is assigned by an intonational prominence rule (as with French; see e.g. Beckman 1986; Ladd 1996; Hyman 2014). I also assume, as with most previous literature, that English differs from Turkish in this regard in that it requires every lexical word to be footed. Given these differences between the two languages, and given a specific learning path for prosody to be proposed in this section, there are certain predictions for English-speaking learners of L2 Turkish, if one views the initial state of L2 acquisition as the L1 settings of parameters, as with the Full Transfer Full Access (FTFA) Hypothesis (e.g. White 1989b; Schwartz & Sprouse 1994; 1996).

Prosodic Acquisition Path Hypothesis

The Prosodic Acquisition Path Hypothesis (PAPH) follows from a proposed representation of prosodic parameters in a tree where some parameters are embedded under others (see also Dresher & Kaye 1990 for a similar approach without the tree). Based on this hierarchical tree representation of prosodic parameters, a prosodic learning path is predicted first for L1 acquisition. In addition, it is assumed that all foot-related parameters, such as Foot-Type, Iterativity, etc., are initially open in L1 acquisition (i.e. not pre-set to a certain value) and can stay as such. Once a parameter is activated (i.e. set to one value or another) in an L1, though, it should be impossible to deactivate it (i.e. make it open again) in an L2 where this parameter is not relevant. On the other hand, resetting parameters from one value to another (as long as it does not result in a prosodic constituent being removed from the grammar) is predicted to be possible, given that positive evidence is available for both directions (e.g. Yes to No and No to Yes). That is, though deactivation is not possible, resetting is; these form, respectively, the first and second main components of the proposal. Not all types of resetting, though, are hypothesized to be equally easy: certain parameters (and parameter values) are expected to be more difficult to reset than others, on grounds of economy, markedness, and robustness of the input, which, as we will see later, is, for the most part, reflected by their location on the tree of parameters proposed in this paper.
All things considered, the PAPH is an attempt at a restrictive and falsifiable account of L2 prosody: not only does it predict certain things to happen, but it also predicts certain things not to happen, even when they are, in principle, allowed by UG. For example, as we shall see, certain learning paths (and interlanguages attained through such paths) are predicted not to occur, even though they would not necessarily violate principles of UG.

The remainder of this section is organized in the following way: Section 2.3.1 outlines the hierarchical organization that is hypothesized to hold among prosodic parameters. Section 2.3.2 outlines the PAPH and its two main building blocks (justifications for these are provided later, in Section 3).

Prosodic parameters in a tree

Before discussing the two components of the PAPH, we consider what prosodic parameters look like under this approach.

The tree diagram in (23) below provides a near-complete set of prosodic parameters that capture word-level stress in the world's languages. All of these parameters have been proposed independently in previous research, as mentioned in Section 2 above; the proposal in (23) departs from this literature in that it embeds these in a tree (see also Dresher & Kaye 1990 for a similar approach to relationships between parameters). As we will see later, this will be relevant in that it will help determine what is hypothesized to be a more vs. less difficult learning direction, on the current approach, for L2 learners. If Footed is set to Yes in (23), speakers of footed languages such as English will have most (or all) of the parameters that are associated with having a foot set to one value or another (either Yes vs. No or Left vs. Right). For speakers of languages like Turkish, on the other hand, since their language has no foot structure, parameters related to footing are all 'open', as with child L1 learners of all languages (including English), for whom these parameters are initially open and waiting to be set (see also Section 3.3.2.1 below).

It is presumed here that first language acquisition follows the path demonstrated in (23). Once the Foot is projected, the parameters in (23) for which there is positive evidence are set to their correct values, from top to bottom. Only after a Yes setting of a parameter can the parameters embedded under it be set to one value or another; they will otherwise stay open. If Boundedness is set to No, for example, the parser does not look further down, and the parameters below it stay open. If, on the other hand, it is set to Yes, the parameters below it may also be activated and set to their correct values on the basis of positive evidence. Similarly, End-Rule can be set (to one value or another) only after Iterativity is set to Yes. If Iterativity is set to No, the L1 learner will not entertain a setting for End-Rule. In fact, as Dresher & Kaye (1990) point out, the positive cue for End-Rule, i.e. main stress to the left or right of a secondary stress, will not be available in languages with non-iterative footing (see e.g. Fikkert 1994; 1998; Kehoe 1998 for evidence for such a developmental path from the L1 acquisition of Dutch and English respectively).
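As an illustration of this top-down path, consider the following sketch. Since the tree in (23) is not reproduced here, the exact dependency structure below is an approximation reconstructed from the prose, not the paper's figure:

TREE = {                 # parameter -> the parameter it is embedded under
    "Footed": None,
    "Boundedness": "Footed",
    "Foot-Type": "Footed",
    "Weight-Sensitivity": "Footed",
    "Extrametricality": "Footed",
    "Foot Binarity": "Boundedness",
    "Iterativity": "Boundedness",
    "Directionality": "Boundedness",
    "End-Rule": "Iterativity",
}

def set_parameter(grammar, name, value):
    """Set a parameter on positive evidence; embedded ones require parent = Yes."""
    parent = TREE[name]
    if parent is not None and grammar.get(parent) != "Yes":
        raise ValueError(f"{name} must stay open: {parent} is not set to Yes")
    grammar[name] = value

g = {}                                  # all parameters start open
set_parameter(g, "Footed", "Yes")
set_parameter(g, "Boundedness", "Yes")
set_parameter(g, "Iterativity", "Yes")
set_parameter(g, "End-Rule", "Right")   # only possible once Iterativity = Yes

Attempting set_parameter(g, "End-Rule", "Right") before Iterativity is set to Yes raises an error, mirroring the claim that the learner does not entertain a setting for End-Rule in a non-iterative grammar.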
The assumption behind this learning principle is that the parser is deterministic (Marcus 1980; Berwick 1985; Dresher & Kaye 1990). A deterministic parser cannot undo previously created structures (or substructures) if the parse does not work. It is local and data-driven, which makes acquisition easier (Berwick & Weinberg 1984; Dresher & Kaye 1990). That it is local means, in the current case, that all the parameters in (23) will be handled one by one, and eventually set to their correct values on the basis of positive evidence, following a path from top to bottom and treating errors locally. For example, if the learner has not yet set Iterativity to its correct value (i.e. if it is still open), and the parser is faced with structures that the current grammar cannot capture, e.g. words with secondary stresses, it will only attempt to change the value of Iterativity (from open/none to Yes); it will not backtrack and attempt to change, for example, the value of Headedness or Weight-Sensitivity, if these have already been set to their corresponding values. As suggested by Dresher & Kaye (1990), a non-deterministic parser with an unlimited backtracking capacity would undo not only incorrect structures but also correct ones, which would lead to problems in acquisition, particularly in learning prosody, given that certain output cues can be interpreted by the child as triggers for different and completely unrelated prosodic parameters (Dresher & Kaye 1990). For example, a word composed of a light syllable followed by a heavy syllable and stressed on its final syllable, as with LH́, can be parsed either as an iamb or as a weight-sensitive trochee. Note that all this means, for L1 acquisition at least, that there will be no mis-set parameters (see Snyder 2007 and Snyder & Lillo-Martin 2011 for similar approaches in the syntactic acquisition literature; see Hyams 1983; 1986 for an alternative account where a child could temporarily choose a non-adult parameter setting). Instead, as has been argued for related parameters in the syntactic acquisition literature, the parameters in the tree in (23) are made available following a maturational schedule (Gibson & Wexler 1994), where the amount of input the learner has received influences maturation (Bertolo 1995).

Prosodic acquisition path

The main theoretical assumptions behind the PAPH are summarized in (24) below, followed by a summary of the two main components of the PAPH in (25). All of these assumptions will be supported with both formal and acquisitional evidence later, in Section 3.

(24) Main theoretical assumptions:
a. All prosodic parameters (i.e. all the parameters in (23)) are initially open in L1 acquisition, and are then set to the correct value based on positive evidence.
b. For some of the parameters that are initially open, i.e. the Yes/No parameters in (23), markedness can be invoked, since, for certain settings of these parameters (the Yes setting), the positive evidence available is more robust (and unambiguous) than it is for the other setting. For others, i.e. the Left/Right parameters in (23), both values are equally unmarked, and the positive evidence available is equally robust.

The predictions of the PAPH, given these assumptions and given (23), are summarized in (25) below:

(25) The two main components of the PAPH:
a. It is impossible to deactivate a parameter altogether. Thus, L2 learners will not be able to deactivate the parameters in (23).
b. Parameter resetting is possible when positive evidence is available.
i. L2 learners will have an easier time resetting terminal parameters than parameters which have other parameters dependent on them. Though resetting a parameter with embeddings is costly, it is still not impossible.
ii. For all parameters with Yes/No settings, whether embedded or not, L2 learners will have an easier time moving from the No value of a parameter to the Yes value than vice versa.
iii. For parameters with Left/Right values, learners will have equal difficulty going from Left to Right and from Right to Left.

These two general predictions form the two main tenets of the PAPH. Although the acquisitional scenario tested here involves English-speaking learners of Turkish only, I assume these two general components of the PAPH to hold true for any language combination to be learnt. Section 3 below presents formal and empirical evidence for the assumptions made here in proposing the Prosodic Acquisition Path Hypothesis, both from the L1 acquisition literature and from the nature of the input.

Acquisitional and formal evidence for the PAPH

This section presents acquisitional and formal evidence for the two components of the PAPH presented in (25) above. It then ends with an outline of the predictions of the PAPH for the learning scenario tested in this study.

Impossibility of deactivating parameters

The status of the relationship between open vs. already-activated parameters has not, to my knowledge, been discussed in the L2 acquisition literature. There is, however, good reason, based on the findings of related research, to assume that once a parameter is set in an L1, it should be impossible to deactivate it (make it dormant). There is extensive evidence for the proposal that L2 learners start with the L1 settings of all parameters (White 1985; Schwartz & Sprouse 1994; 1996) and that moving from a marked value to an unmarked one is very difficult (see e.g. White 1987; 1989c; 2003a). If deactivation of parameters were possible, L2 learners should be able to deactivate all parameters, regardless of the availability of positive evidence, and should thus be able to start from scratch, like children, for this would be the most effective way of acquiring a second language, as would be predicted by the Full Access (without Transfer) Hypothesis (e.g. Flynn & Martohardjono 1994; Flynn 1996; Epstein et al. 1996). There would then be no formal differences between learning a marked vs. an unmarked value of a parameter, since all learners would have the same starting point, or would at least be able to switch, in a reasonable time period, to the open value of a parameter without much difficulty. Consequently, regardless of the L1 background, the end state of L2 acquisition would be the same target-like grammar. That is, if deactivation of parameters were possible, L2 acquisition would be no different from L1 acquisition.
Since we do not observe this, as demonstrated by previous experimental research, I hold to the position that it is impossible to deactivate parameters that are already set to one value or another. It should, therefore, be easier to move from an L1 which has not yet set a parameter to an L2 that has already done so than vice versa. In the present case, then, L1 English learners of L2 Turkish will not be able to deactivate the parameters under Footed=Yes in (23) that are irrelevant for Turkish regular stress/prominence. For example, they will not be able to get rid of the concept of Headedness (Foot-Type) or Boundedness in learning Turkish, although they can, I hypothesize, reset the former from Left to Right and the latter from Yes to No. That is, resetting is possible, whereas deactivation is not (more on the issue of resetting in the next section).

There can, however, be 'de facto deactivation', for some parameters in (23) are dependent on others: a parameter can be de facto deactivated when the parameter it is dependent on is reset from Yes to No. For example, if Iterativity is reset from Yes to No, this automatically means that End-Rule is no longer relevant, i.e. it is de facto deactivated. Similarly, when Boundedness is reset from Yes to No, Binarity and Iterativity (and any parameter under them) will be de facto deactivated. Resetting the Footed parameter at the very top of (23) from Yes to No would also result in the de facto deactivation of all parameters under Footed=Yes, although this will be very difficult (see below), for it requires the de facto deactivation of all foot-related parameters (a big change in the grammar) and, in addition, results in the loss of a prosodic constituent, unlike the parameters underneath.

Notice at this point that this predicts certain things not to happen; there should, for example, be no interlanguage grammar where a learner deactivates End-Rule but keeps Iterativity or Boundedness set to Yes. This is despite the fact that iterative systems with no End-Rule are attested, e.g. Tübatülabal (e.g. Voegelin 1935; Hayes 1981; Prince 1983). All stresses in a given Tübatülabal word are equally strong; thus, there seems to be no main stress (Kager 1993). In other words, the End-Rule parameter, in Tübatülabal, is open, as opposed to in languages such as English. The only way for English-speaking learners to avoid rendering one foot stronger than the others is, then, through resetting Iterativity or Boundedness to No (i.e. through de facto deactivation of End-Rule, rather than true deactivation).

In sum, the PAPH predicts that one cannot deactivate a parameter altogether, though de facto deactivation, in the form of resetting a parameter with embeddings from Yes to No, thereby rendering the parameters underneath irrelevant, is possible.
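The resetting-with-cascade logic just described can be sketched as follows, again with the dependency structure approximated from the prose rather than taken from figure (23):

DEPENDENTS = {
    "Footed": ["Boundedness", "Foot-Type", "Weight-Sensitivity", "Extrametricality"],
    "Boundedness": ["Foot Binarity", "Iterativity", "Directionality"],
    "Iterativity": ["End-Rule"],
}

def reopen_below(grammar, name):
    for child in DEPENDENTS.get(name, []):
        grammar.pop(child, None)             # the child becomes open again
        reopen_below(grammar, child)

def reset(grammar, name, value):
    grammar[name] = value                    # the parameter itself remains activated
    if value == "No":                        # Yes -> No: dependents lose their anchor
        reopen_below(grammar, name)

# An English-like interlanguage grammar en route to Turkish:
g = {"Footed": "Yes", "Boundedness": "Yes", "Iterativity": "Yes", "End-Rule": "Right"}
reset(g, "Iterativity", "No")                # End-Rule is de facto deactivated
print(g)   # {'Footed': 'Yes', 'Boundedness': 'Yes', 'Iterativity': 'No'}

Note that End-Rule is removed (reopened) rather than set to some value: the sketch enforces exactly the asymmetry claimed above, namely that dependents can only be de facto deactivated, never truly deactivated in isolation.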
Greater difficulty with resetting parameters with embeddings

In the absence of the ability to deactivate parameters (see 3.1), how does the L1 English learner of L2 Turkish (or any other footless language) proceed through the acquisition process? The PAPH predicts that although such learners may have difficulty attaining the target end-state grammar of Turkish, they should still be able to resort to a variety of UG-constrained options that make their interlanguage sound, on the surface, more and more target-like, by changing the parameter values in (23). In doing so, I predict that they will initially change the settings of terminal parameters rather than those with embeddings, for terminal parameters have no parameters dependent on them, and so resetting them does not require other parameters to be (de facto) deactivated. This, in turn, constitutes a smaller change in the grammar (thereby making it a more economical, and local, decision). Therefore, resetting Iterativity (or Boundedness) from Yes to No will likely not be the first option L2 learners consider, for this renders the End-Rule parameter that is dependent on Iterativity (de facto) inactive, and that, though possible, should be costly, as it involves a big change in the grammar, one that affects the fate of multiple parameters. In this regard, resetting Boundedness from Yes to No should be the most difficult option (excluding expunging the Foot altogether), for Boundedness is the parameter in (23) with the greatest number of parameters dependent on it, other than Footed (which has the most, and whose resetting would in addition expunge a prosodic constituent). Options such as switching Extrametricality from Yes to No should be relatively easy, on the other hand, for it is a terminal parameter with no other parameters dependent on it. This also means that, for parameters with embeddings, Yes values will be more difficult to reset to No than vice versa, for it is the Yes values in (23) that have other parameters dependent on them. If we assume, in addition, a principle, as argued by Dresher (to appear), entailing that learners make only as many contrasts as the data require, we reach the same conclusion, i.e. that weight-insensitivity is the default/more unmarked setting, as is argued here.

As an additional example, consider Boundedness: every bounded foot is maximally binary. There is, for example, no way to misanalyze a hypothetical word like pátakàtalàta as unbounded; it would likely be footed as (páta)(kàta)(làta), which gives robust evidence that the foot is maximally binary (i.e. bounded). A word like pátakatalata, on the other hand, could be analyzed as bounded or as unbounded (i.e. (páta)katalata or (pátakatalata) respectively), meaning that the evidence for the No setting of Boundedness is not as robust. In fact, if Bounded=Yes were the unmarked option provided by UG, there would likely be no unbounded languages in the world (except those with weight-sensitivity).
For the other Yes/No parameters, i.e. Extrametricality and Iterativity, I follow Fikkert (1995) in assuming that No is the unmarked setting for both. Dresher & Kaye (1990) similarly assume that No is the unmarked value for Extrametricality, but differ in their claim that the Yes setting is unmarked for Iterativity, arguing that there is positive evidence for the No setting of this parameter in the form of the absence of secondary stress. As Fikkert (1995) points out, however, one could also argue that the presence of secondary stress is a positive cue for the Yes value of this parameter, and therefore assume the unmarked value to be No. She also presents evidence for this from Dutch child language acquisition data: although stress assignment in Dutch is iterative and weight-sensitive (e.g. Trommelen & Zonneveld 1989; 1990), child learners of Dutch go through a stage where they have only one left-headed foot per word, at the right edge of the prosodic word (Fikkert 1995; 1998); that is, even though Dutch is weight-sensitive and iterative, child Dutch is not. Fikkert distinguishes four stages in the acquisition of stress in Dutch, and shows that Iterativity is only learnt at Stage 3 (around age 2;2).

In addition, all things being equal, it should always be easier to notice the presence of something than its absence (see Marcus 1993 for L1 acquisition); noticing an absence requires access to more forms, in order to be sure that the relevant property is indeed absent. Take, for example, the following situation: in a trochaic weight-sensitive language, trisyllabic words can have secondary stress in words with weight profiles of e.g. HHL, HLL, etc.: (H̀)(H́)L and (H̀)(ĹL) respectively. That is, seeing words with surface stress profiles of H̀H́L and H̀ĹL is sufficient for a learner to activate the Yes setting of Iterativity. The converse, however, is not true. If the same learner came across words with stress profiles of HH́L and HĹL, i.e. words without secondary stress, this would not necessarily mean that Iterativity in the language being learnt is set to No; it could alternatively be due to the possibility that the relevant language is not weight-sensitive (and thus that H is not bimoraic). Alternatively, it could be due to (leftmost) Extrametricality, if such languages exist (see Kager 1989). Yet another possibility is that there is destressing in clash, resulting in a single stress in forms like HH́L and HĹL. Such a learner will clearly need to come across longer words or more word types in order to activate the No setting of Iterativity. In sum, moving from the Yes setting of Iterativity to the No setting would clearly require more evidence than vice versa, another reason why the No setting, instead of Yes, must be the default.

To summarize so far, there are two different factors leading to the Yes-to-No direction being more difficult than the No-to-Yes direction. The effects of the two factors can be disentangled, for we expect much more difficulty when parameters with embeddings are reset from Yes to No than when terminal parameters are. On the other hand, there should be no difficulty in the No-to-Yes direction; this is, after all, a movement towards a marked setting, and it does not result in the (costly) de facto deactivation of any parameters, for no parameters are embedded under the No values.

Left to Right vs. Right to Left are equally easy

Finally, for parameters in (23) whose values express directions, e.g. left-headed vs. right-headed feet or Left-to-Right vs.
Right-to-Left footing, both values are predicted to be equally easy to reset to. These, I argue, are equally unmarked (see below). From the perspective of robustness of the input, the evidence for either value is equally robust for these parameters, unlike for Yes/No parameters. For example, the evidence for whether a foot is left-headed or right-headed is equally robust; the only thing that differs between left-headed and right-headed feet is the location of the stressed syllable, e.g. (σ́σ) vs. (σσ́). Similarly, the difference between a word tree that is strong at the left edge (End-Rule-Left) and one that is strong at the right edge (End-Rule-Right) is the word edge at which the most prominent stressed syllable occurs, e.g. [(σ́σ)(σ̀σ)] vs. [(σ̀σ)(σ́σ)] for a trochaic language, or [(σσ́)(σσ̀)] vs. [(σσ̀)(σσ́)] for an iambic language.

Likewise, the only difference between left-to-right and right-to-left footing (Directionality) is whether an initial/final or a peninitial/penultimate syllable is stressed in words with an odd number of syllables, e.g. [(σ́σ)σ] vs. [σ(σ́σ)] for a left-to-right vs. right-to-left trochaic language, and [(σσ́)σ] vs. [σ(σσ́)] for a left-to-right vs. right-to-left iambic language. Although it might be argued here that there is better evidence for the left-to-right direction for trochees and right-to-left for iambs, in the form of adjacent syllables that are unstressed, the balance is tipped in favor of the opposite direction in languages that allow degenerate feet, in the form of adjacent syllables that are stressed, e.g. [(σ́σ)(σ̀)] vs. [(σ̀)(σ́σ)] for a left-to-right vs. right-to-left trochaic language, and [(σσ́)(σ̀)] vs. [(σ̀)(σσ́)] for a left-to-right vs. right-to-left iambic language.

In sum, both directions of all Left/Right parameters seem to involve equally robust evidence. There should, therefore, be no difference in the level of difficulty between moving from one setting to the other for any of the prosodic parameters in (23) whose values express directions.

Evidence from L1 acquisition

Most of the evidence cited above for the equally unmarked status of both values of Left/Right parameters and the more marked status of the Yes setting of Yes/No parameters came from formal assumptions about the robustness of the input. There is, in addition, some evidence for these assumptions from the findings of the L1 acquisition literature.

Although not all parameters have been studied in L1 acquisition research, one parameter, Foot-Type, a Left/Right parameter, has been particularly well investigated. Some of the findings of this line of research are, in addition, informative about the assumptions made here for the other parameters, including Yes/No parameters. Section 3.2.4.1 below provides an overview of the findings of L1 acquisition research on the Foot-Type parameter and discusses its implications for the current proposal. Section 3.2.4.2 then overviews the implications of some of the findings of this and similar research for the other parameters discussed above.
Against the Trochaic Bias Hypothesis

Early research on the L1 acquisition of stress claimed that there is an initial universal trochaic phase for all learners, including those learning iambic languages (Allen & Hawkins 1978; 1980). This suggests that Trochaic is the default setting provided by UG for the Foot-Type parameter, contra the arguments made above for the equally unmarked status of trochaic and iambic grammars, as well as for the other parameters that express directionality, whose default value, I have assumed, is open (i.e. not initially set to one value or the other).

If the current proposal is correct, then the so-called Trochaic Bias Hypothesis should be incorrect; that is, there should not be an initial trochaic phase for all learners, i.e. including those learning iambic languages. Rather, learners of trochaic languages should start with a trochaic (left-headed) foot (once they get enough positive evidence), and learners of iambic languages should start with an iambic (right-headed) foot. In both cases, the parameter should be set to the correct value from the beginning on the basis of positive evidence, since it is not a parameter with values in a subset/superset relationship, nor is it a Yes/No parameter, but rather a Left/Right one; thus, neither markedness nor the availability of positive evidence predicts one setting to be available earlier than the other.

This prediction seems to hold true, for there has so far been virtually no evidence for a "Trochaic Bias" from L1 acquisition research. In fact, the hypothesis was offered based on the behavior of English-learning children (Allen & Hawkins 1978; 1980), and virtually all evidence for it comes from learners of trochaic languages, e.g. English (Gerken 1991; 1994; Kehoe 1998), Dutch (Fikkert 1994; Wijnen, Krikhaar & den Os 1994), and Greek (Kappa 2000). That is, as will be demonstrated below, it is only children learning trochaic languages who seem to choose trochees from the onset of acquisition, and not those learning iambic languages. To my knowledge, the only exception so far has been Hebrew-learning children, who were argued to have a trochaic bias (Adam & Bat-El 2008) despite acquiring an iambic language (Bat-El 1993); however, recent analyses of Hebrew stress have proposed that the language may actually be trochaic (see Becker 2003). All things considered, then, there seems to be little evidence for a Trochaic Bias.

In fact, there is some evidence against it from both trochaic and iambic languages: Hochberg (1988), for example, demonstrates that children have a "neutral start" in acquiring Spanish stress, meaning that, at the beginning, they produce many iambic, as well as trochaic, profiled words; that is, they do not have a bias towards trochaic stress. This is despite the fact that Spanish is analyzed as trochaic by most researchers (see e.g. Roca 1988; 1991; Harris 1991; 1992). Similarly, according to Prieto's (2006) study, children have a neutral start in learning Catalan, a language that is usually analyzed as trochaic (Serra 1996; Bonet & Lloret 1998), although Catalan stress is contrastive and is thus also compatible with an iambic analysis (Wheeler 2004).
The so-called bias demonstrated for English- and Dutch-learning children should, then, come from the rhythmic properties of the input children receive in learning these languages. Once they are exposed to enough input from the target trochaic language, they accordingly set the value of the Foot-Type parameter to Trochaic. If so, even for English-learning children, at very early stages, there may be no preference for trochaic feet. This prediction seems to hold true: Vihman, DePaolis & Davis (1998) demonstrated, for English-learning children, that during the babbling stage they produce an equal number of trochaic and iambic patterns in their bisyllabic utterances (see also Klein 1984 for similar results).

Conversely, if there is no "Trochaic Bias", children learning languages that are not trochaic should not start with trochaic utterances. Children learning iambic languages should, for example, favor iambs over trochees. This prediction, too, is borne out, though most of the evidence comes from the acquisition of French (e.g. Paradis, Petitclerc & Genesee 1997; Vihman et al. 1998; Archibald & Carson 2000; Goad & Buckley 2006; Rose & Champdoizeau 2007; Goad & Prévost 2011) and Turkish (Aksu-Koç & Slobin 1985), two languages that probably have no foot structure (Özçelik to appear). So the lack of a trochaic bias in learners of these languages might also be attributed to these languages being footless. There are, though, two other studies on languages that have been argued to be iambic in the literature: Yucatec Mayan (Archibald 1996) and Northern East Cree (Swain 2008). Learners of these languages produced utterances that were consistent with an iambic analysis. Of these, the latter, Northern East Cree, seems to be truly iambic (i.e. probably not footless), as it demonstrates such properties as boundedness and quantity-sensitivity (see e.g. Dyck, Brittain & MacKenzie 2006; Wood 2006), as with other languages of the Algonquian family, e.g. Ojibwa (Bloomfield 1957; Piggott 1980; 1983). The point would, of course, have been clearer if there had been more studies with learners of indisputably iambic languages. However, even the results of the studies with Spanish- and Catalan-learning children, as well as with learners of English at the babbling stage, should suffice as evidence against the default status of trochaic stress, as they demonstrate that children do not necessarily start with trochees even when learning a trochaic language.

More from L1 acquisition on default vs. open settings

Little research has been done on the L1 acquisition of the other Left/Right parameters. Fikkert (1994), however, demonstrates that when children learning Dutch start producing words composed of more than one foot, they produce equal stress on the head of each foot. I interpret this observation as indicating the lack of a default option for the End-Rule-Left/Right parameter; the parameter had probably not yet been set to either value, and was still open, as was argued here to hold for the initial setting of all parameters that result from having foot structure.
I expect all Left/Right parameters in (23) to behave in the same way. As argued above, there is no principled reason why, for these parameters, UG should make one value the default option, given that positive evidence is equally available in both directions. In fact, having a default value for these parameters would not only not help an L1 learner in the acquisition process, it would, rather, serve to increase the burden on the learner, since a previously assumed (default) analysis would, then, need to be altered based on positive evidence.

Unfortunately, Yes/No parameters have not received the same attention in L1 acquisition research as Left/Right parameters have. Most of the arguments given above for the marked status of the Yes setting have come, therefore, from the formal assumptions about the comparative robustness of input. There is, however, some evidence in support of these arguments that indirectly comes from the literature on the Trochaic Bias Hypothesis.

First, virtually none of the learners tested in these studies showed Extrametricality, even when learning a language with extrametrical final syllables. Kehoe (1998), for example, found that target English nouns of the shape (H Ĺ)<H> were often produced with final stress by children, which suggests that the Yes value of this parameter had not yet been set. This is evidence that the Yes setting is more marked, since errors, in L1 acquisition, are usually made only by children learning a language with the marked value (see e.g. Fikkert 1994). Therefore, errors should take the form of the unmarked or the open value of a parameter (though, here, no activation and the unmarked No value, on the surface, yield the same outputs).

Second, one type of evidence that is often cited as support for the Trochaic Bias Hypothesis is that target LH́ forms such as balloon, which have final stress, are often produced as ĹH (with initial stress) by learners of both English and Dutch (see e.g. Fikkert 1994; Kehoe 1998). The problem with taking these facts as evidence for a trochaic bias is that the change from final to initial stress in such words does not turn them into trochees (they already are trochaic), so the behavior cannot have been caused by a preference for trochaic feet. Rather, the change potentially signifies that children have a preference for weight-insensitive grammars, which is predicted by the PAPH, since Weight-Sensitivity is a Yes/No parameter, unlike Foot-Type, and weight-sensitive grammars have the marked setting of this parameter. Weight-sensitive systems, therefore, should arise later, only on the basis of positive evidence.

In sum, as both the formal arguments about the nature of the input and the findings of the L1 acquisition literature indicate, there is no reason, for Left/Right parameters, to have a value that is more unmarked than the other, whereas there is good reason to make such an assumption for Yes/No parameters. In either case, all parameters that follow from having a foot are initially open. Whereas open means, as far as L1 acquisition is concerned, a neutral start for Left/Right parameters (one that favors neither Left nor Right), it is empirically equivalent to the No setting for (most) Yes/No parameters.
Predictions

Given the linguistic properties of English and Turkish (see Sections 2.1 and 2.2), and given the Prosodic Acquisition Path Hypothesis (PAPH) proposed in Section 2.3, for which we have presented evidence in this section, certain predictions follow for L1 English-speaking learners of L2 Turkish. Before moving on to the predictions, a summary of the representations for the two languages that are under investigation here is provided below.

L1 representations of the two languages

The trees in (26) and (27), below, provide a schematic representation of the parameter settings for each of the two languages that are under focus in this paper. The specific options chosen by a given grammar are provided in boldface; for example, in (26) below, the option Yes is bolded under Footed, for this is the option taken by English, a language with foot structure.

We start with English. As was explained in Section 2, feet in English are weight-sensitive, binary and left-headed (moraic trochees). Foot construction is right-to-left, but the rightmost syllable is extrametrical; it is, therefore, ignored for the purposes of foot construction, and Extrametricality is, thus, set to Yes. Feet are iterative in English; as long as a word is long enough, there will be multiple binary feet, out of which the rightmost will be the head of the PWd, since End-Rule is set to Right, and the head of this foot will, thus, bear primary stress.

(26) English stress [parameter tree; settings as summarized above]

Turkish, on the other hand, does not require words to have feet (Özçelik 2014; to appear). As discussed in Section 2, Turkish only has demarcative cues to the edges of words, rather than "stress." As such, I analyze Turkish as footless, as represented in (27) (see Section 2.2 for a more detailed analysis). And since Footed=No in Turkish, all other parameters under Footed=Yes (see (26) above) are open.

(27) Turkish stress [parameter tree; Footed is set to No]

Hypotheses

The universal predictions of the PAPH, predictions that are expected to hold true for all L1-L2 pairings, were presented in Section 2.3 above (see the summary in (25)), along with an in-depth examination of the proposal and formal and empirical evidence to support it. The current section covers more specific predictions, in the context of L1 English learners of L2 Turkish, the population tested in this paper.
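Before turning to the specific predictions, the parameter settings just summarized can be recapped in compact form. The following encoding is purely illustrative (the dictionary format and key names are mine, not the paper's notation); it simply restates the boldfaced options of (26) and the Footed=No setting of (27) in executable form.

# Illustrative Python encoding of the settings schematized in (26)-(27).
ENGLISH = {                      # (26): English stress
    "Footed": "Yes",
    "Weight-Sensitive": "Yes",   # codas and long vowels are moraic
    "Foot-Head": "Left",         # trochaic (moraic trochees)
    "Direction": "Right-to-Left",
    "Extrametricality": "Yes",   # rightmost syllable ignored
    "Iterative": "Yes",          # multiple binary feet in long words
    "Bounded": "Yes",            # feet are binary
    "End-Rule": "Right",         # rightmost foot head bears primary stress
}
TURKISH = {"Footed": "No"}       # (27): footless; all parameters dependent
                                 # on Footed=Yes remain open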
The acquisition task for the learners of Turkish will be difficult, as this involves resetting a top-level parameter in the tree, one whose being reset has consequences for all parameters underneath (one with the most embeddings), as well as resulting in ridding the grammar of a prosodic constituent, the Foot. Thus, the Foot will perhaps never be expunged from the grammar. Note that this prediction of the PAPH does not align with a pre-theoretical 'common sense' view that acquisition of Turkish stress should be easy, since it is very simple to state (especially in comparison to languages like English): stress the word-final syllable. Further, note that given the Subset Principle, too, expunging the Foot should be difficult or perhaps impossible, since there is a subset-superset relationship between languages that do not have feet (subset) and those that have it (superset), as the null set (footless) is the subset of every set. However, it should also be noted that this is different from a classical subset-superset problem in L2 acquisition (see White 1989a; b) in that positive evidence is available in both directions in the form of phonetic cues, since intensity and/or duration are correlates of foot structure, and these are always absent from a footless representation (see Section 2.2 above). Traditionally, moving from a superset to a subset value is considered to be difficult/impossible precisely because there is no positive evidence in such cases to indicate to the learner that the extra option allowed in the L1 is disallowed in the L2 (White 1989a; Slabakova 2006).

Nevertheless, based on the input, the learners will restructure their grammar by going through a number of stages, first resetting parameters that are expected to be easier to reset, i.e. terminal parameters, and only in later stages, resetting parameters with embeddings. This will make their interlanguage sound, on the surface, more and more target-like (though not necessarily structurally target-like).

That is, the hypothesis that English-speaking learners of Turkish will perhaps not be able to expunge the Foot from their grammar and thus not be able to reach target-like representations of Turkish regular stress does not mean that they will always use their L1 representations: As per (25b), they are expected to consider a variety of UG-constrained options, and in doing so, they should initially resort to easier options rather than more difficult ones (see (25b.i) through (25b.iii)). For example, in learning Turkish regular stress, English-speaking learners should first reset a terminal parameter like Extrametricality or Headedness, rather than a parameter with embedding like Iterativity.

Methodology

In order to test the PAPH, a production experiment was designed with English-speaking learners of Turkish. The section below presents information on the subjects (Section 4.1), materials and stimuli (Section 4.2), procedures (Section 4.3), as well as data analysis (Section 4.4).
Subjects

A total of 46 English-speaking learners of Turkish participated in the experiments. According to self-report, all subjects were near-monolingual and had normal or corrected-to-normal vision and had no hearing problems. In order to independently determine their proficiency level, two proficiency tests were used, a cloze test measuring their syntactic and morphological skills and a read-aloud test assessing their global phonological proficiency. The readings obtained as a result of the read-aloud test were, then, rated, on a scale of 1 to 7, by three native speakers of standard Turkish, 1 being least, and 7 being most native-like in terms of global accent (see e.g. Akita 2006; 2007 for a similar procedure with the read-aloud task). Also included among the L2 learners' readings were readings by 3 Turkish native speakers, as well as readings by 3 learners of Turkish who were not subjects in the current study, totaling 50 readings to be rated (44 learners + 3 native speakers + 3 non-native non-subjects). The ratings were done in a pseudo-random order. The inclusion of native speaker readings and non-native non-subjects was done in order to help define the upper and lower bounds, thereby leading to more accurate ratings (see White & Genesee 1996), as well as to control the potential confounding effects of just having started the rating process.

By adding together the subjects' scores on the two proficiency tests, the cloze test and the read-aloud task, overall proficiency scores were calculated, and this was done in percentage terms. In doing so, each of the two tests contributed equally. Based on these overall proficiency scores, the subjects were assigned into different proficiency groups: beginner (n=14), intermediate (n=21) and advanced (n=11).

Finally, regarding their background, the subjects ranged in age from 17 to 41 years old at the time of testing. All of them started learning the target language after age 16, with most starting in college around age 21. All of them had at least 1 to 2 years of formal instruction in Turkish, with the exception of two beginners and one intermediate learner who had no formal instruction (other than self-study) in the target language. All of the subjects had some regular naturalistic input in the target language. Most had native-speaking partners or friends (or roommates) with whom they communicated regularly in Turkish. None of the subjects was a heritage learner of the target language.

Materials and stimuli

The stimuli were composed of a total of 70 words, all of which were concrete nouns that could be depicted via a picture and could be known even by low proficiency learners. These included 20 bisyllabic and 40 trisyllabic words, all of which were controlled for the weight profiles of each syllable they contained. In addition, 5 four-syllable and 5 five-syllable words were also used (not controlled for weight), although these were not included in the analysis in the end, first because these were found to be abstract or recently borrowed words (reflective of the general situation with longer uninflected nouns in Turkish) and because many learners produced them with an internal pause (due to their length), which would have confounded the results.

The 60 bisyllabic and trisyllabic words embodied all possible combinations of open vs. closed syllables, i.e. LL, LH, HL, and HH for bisyllabic words, and LLL, LLH, LHL, HLL, HHL, HLH, LHH, and HHH for trisyllabic words, leading to 12 different conditions, each of which had 5 words.
Examples of bisyllabic and trisyllabic stimuli are presented below in Tables 1 and 2 respectively. For each stimulus, the phonetic transcription is given in the first row, followed by their written form in Turkish and English gloss.

Most light syllables started with an onset consonant followed by a nucleus vowel, whereas some were onsetless in that they only contained a nucleus. That is, onset profiles were not controlled, for onsets are not usually considered to contribute weight to a syllable (see e.g. Hayes 1995). As for 'heavy' syllables (heavy from the perspective of English), all of these contained a single nucleus vowel followed by a coda consonant. Although Turkish has a number of words with long vowels, since there are very few of them (all of which are borrowed words), syllables with long vowels were not included in the stimuli. Further, including heavy syllables that are composed of a nucleus and a coda consonant was considered to be sufficient for the purposes of the current study, since English (the L1) is weight-sensitive regarding both codas and long vowels (and Turkish is sensitive to neither).

Coda + onset sequences were also controlled for their sonority profiles, as this could impact the way a sequence of two syllables is syllabified, thereby altering the weight profiles of the relevant syllables. As such, all coda plus onset sequences in the stimuli were either sonorant + obstruent, sonorant + sonorant, or obstruent + obstruent. Crucially, obstruent + sonorant sequences were not employed, as these would be syllabified as complex onsets in English, even though they are coda + onset sequences in Turkish, since Turkish has no complex onsets (Kornfilt 1997; Kabak 2011), and transfer of L1 syllabification strategies to the target language could confound assumptions about syllable weight profiles. To give an example, if a word-medial coda + onset sequence in a Turkish word such as as.lan 'lion' (an obstruent + sonorant sequence) were analyzed as a complex onset by the subjects, and were accordingly syllabified as a.slan, the initial heavy syllable would turn into light, changing an HH word into LH, which can then impact location of stress/prominence.

Experimental procedure

Each subject was tested individually in the following order: First, they were asked to fill in the background questionnaire. Then, they took the production experiment, followed by the read-aloud task, both of which were completed on computer in a sound-attenuated booth. Finally, the cloze test followed, which, as with the background questionnaire, was taken in a paper-and-pencil format.

11 Closed syllables in Turkish are not heavy, but I simply use 'H' vs. 'L' to denote the difference, which reflects English representation of closed vs. open syllables, as closed syllables, along with syllables that contain long vowels, are heavy in English, and thus, attract stress (see Section 2.1 above).

In order to control for a possible order effect, the experimental stimuli were pseudo-randomized. In addition, half of the subjects completed the experiments with one order (1 to 70), and the other half with the reversed order, i.e. 70 to 1, ensuring randomization between subjects. Each of the stimuli was first uttered in isolation, and then, once again, in a carrier sentence. The following carrier sentence was used:

(28) Bu resim-de X var
     this picture-LOC X exist(ent)
     "There is X in this picture."

For the purposes of data analysis, only the words contained in the carrier sentence were transcribed and later analyzed (see below).
Before the subjects started the production experiment, they were presented with a practice session composed of two words in order to ensure that they fully understood the task. The experimenter, who was not made aware of the purpose of the task, made sure not to utter the target words during the practice session in an effort to avoid influencing subjects' pronunciation. The practice session was followed by two words that were not included in the data analysis, as the purpose of including these was to avoid possible effects of having just started the experiment.

For all the experimental stimuli, each subject first saw a picture of the target word on the computer screen (as a PowerPoint slide), which was accompanied with the first letter of the word (see Figure 1). At this stage, their task was to name the object depicted in the picture. The reason for providing the first letter of the object was to help them retrieve the target word from memory, especially in cases where there could be synonyms or where the picture could be interpreted as two different things. In the example below, the presence of the first letter serves to help them identify the object as portakal 'orange', instead of, for example, mandalina 'tangerine'.

At the next step, the subjects saw the picture again (see Figure 2), but this time, with the carrier phrase, and their task was to utter the word within the carrier phrase. Once they produced the word within the frame sentence, the subjects, then, moved on to the next experimental item by clicking on the slide. The particular frame sentence was chosen based on several reasons. First, it is simple and rather short, and as such, there isn't much processing load involved on the part of the subjects, thereby decreasing chances of producing utterances with word- or sentence-internal pauses. Second, and perhaps even more importantly, the target stimulus within the frame sentence is located in non-final position, as it is followed by var, which avoids utterance-final effects on the target word.

Data analysis

As mentioned above, only words uttered in a carrier sentence were transcribed and later analyzed. This was done to avoid certain confounding variables that could arise from words being produced in isolation, such as utterance-final lengthening and pitch fall, which could, in turn, signal greater final prominence than a given subject might produce. Further, the fact that these words were first produced in isolation before being produced in a carrier sentence helped remove any effects that could be caused by 'guessing'; words that are guessed are uttered with an intonation pattern that is usually associated with questions, and are, therefore, accordingly produced with a rising pitch and are lengthened on their final syllables.

The words produced in the carrier sentence were analyzed using an auditory measure for stress/prominence placement by two trained Turkish-speaking phonologists with background in Turkish phonology and teaching the Turkish language. In cases of disagreement, the relevant item was discarded. The two listeners also used the Praat acoustic analysis software (Boersma & Weenink 2011) to back up the auditory data in determining prominence location, referring to vowel and syllable duration (in ms), average and peak intensity (in dB), average fundamental frequency (F0, in Hz), and time of F0 peak.
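As an illustration of this step, the following minimal sketch shows how the acoustic correlates listed above could be extracted with the praat-parselmouth Python interface to Praat. This is not the raters' actual procedure; the file name and the interval times are hypothetical stand-ins for a hand-labeled target word.

# Minimal sketch: duration, mean/peak intensity, mean F0, and time of the
# F0 peak for one (hypothetically labeled) target-word interval.
import numpy as np
import parselmouth  # the praat-parselmouth package

snd = parselmouth.Sound("subject01_item12.wav")        # hypothetical file
word = snd.extract_part(from_time=0.85, to_time=1.42)  # hand-labeled span

duration_ms = word.get_total_duration() * 1000

intensity = word.to_intensity()
mean_db, peak_db = intensity.values.mean(), intensity.values.max()

pitch = word.to_pitch()
f0 = pitch.selected_array["frequency"]    # 0.0 marks unvoiced frames
mean_f0 = f0[f0 > 0].mean()
f0_peak_time = pitch.xs()[np.argmax(f0)]  # time (s) of the F0 maximum

print(duration_ms, mean_db, peak_db, mean_f0, f0_peak_time)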
Sentences in which the target word was produced with a pause between its individual syllables were excluded from the analysis. Since four- and five-syllable words involved a particularly high number of pauses, especially for certain low-proficiency learners, these stimuli were completely excluded (see above), and thus, only bisyllabic and trisyllabic stimuli were analyzed.

General results

Table 3 below summarizes overall results, indicating which syllable, for both bisyllabic and trisyllabic stimuli, bore primary stress in the utterances of the subjects. 12 The first number in each cell indicates the percentage of words stressed on the corresponding syllable, while the second number (in parentheses) is the standard deviation.

12 Overall, only 1.84% of all stimuli were excluded from the analysis for reasons such as the presence of a word-internal pause or disagreement between the two raters.

As seen, the proportion of subjects' finally prominent syllables increases as they become more proficient in the target language. Whereas beginners stress word-final syllables about 23% of the time for bisyllabic and 21% of the time for trisyllabic Turkish words, these numbers rise to about 66% and 61% for the intermediates and 83% and 77% for the advanced learners. The results of a one-way ANOVA confirm that the difference between the three groups was statistically significant, for both bisyllabic (F(2, 43) = 15.515, p < 0.001) and trisyllabic words (F(2, 43) = 15.482, p < 0.001). Further, the results of a Tukey paired HSD demonstrate that this difference was due to the fact that both the intermediate and the advanced learner groups differed significantly from the beginner group (p < 0.001 for both pairs, for both bisyllabic and trisyllabic stimuli). The intermediate and the advanced groups did not, however, differ from each other significantly (p = 0.258 for bisyllabic and 0.251 for trisyllabic stimuli). Clearly, then, there was a substantial amount of individual variation in the intermediate and the advanced groups, with some intermediate learners performing similarly to the advanced learners and some advanced learners performing similarly to the intermediates.

The general results as presented here are informative in the sense that they reveal the significant differences between the beginner group and the intermediate and advanced groups, but they do not, as it stands, indicate what exactly is going on in the individual grammars of these learners, especially since different learners at the same proficiency level are lumped together. As seen from the very high standard deviations, these percentages are not very informative; a 60% success level on the surface (as with the intermediates' performance on trisyllabic stimuli) can, for example, be due to two very different grammars being lumped together, one of which gives 20% final stress, the other 100%. To put it another way, in order to have a clear understanding of individual learner grammars, one needs to distinguish not only between these two hypothetical cases, but also know what, in particular, is responsible for the 20% (or 100%) success rate. In other words, one needs to pinpoint whether the 20% is random or is indicative of something more subtle, such as certain stimuli shapes being stressed on final syllables by such a learner, as opposed to other shapes of stimuli.
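For readers who want to reproduce this kind of group comparison, the sketch below shows one standard way to run a one-way ANOVA followed by Tukey HSD comparisons in Python. The per-subject percentages are simulated placeholders (the real values are those summarized in Table 3), and the scipy/statsmodels calls are one common implementation, not necessarily the software used in the study.

# One-way ANOVA plus Tukey HSD over the three proficiency groups, on
# simulated per-subject percentages of finally stressed words.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
beginner = rng.uniform(5, 40, 14)        # n=14, placeholder data
intermediate = rng.uniform(40, 90, 21)   # n=21
advanced = rng.uniform(60, 100, 11)      # n=11

F, p = f_oneway(beginner, intermediate, advanced)
df_within = len(beginner) + len(intermediate) + len(advanced) - 3
print(f"F(2, {df_within}) = {F:.3f}, p = {p:.4g}")   # df match F(2, 43)

scores = np.concatenate([beginner, intermediate, advanced])
groups = ["beg"] * 14 + ["int"] * 21 + ["adv"] * 11
print(pairwise_tukeyhsd(scores, groups))  # pairwise group comparisons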
With these considerations in mind, the following scatterplot (see Figure 3), which details each subject's percentage of finally stressed bisyllabic words, is quite informative and presents interesting insight into individual learner grammars. At first glance, what this scatterplot shows is total chaos, with learners' performance all over the place, regardless of proficiency level. A more detailed look, however, suggests that, based on the percentage of finally stressed syllables they had, there were in general four types of learners: The first were learners who had almost no finally stressed syllables, as with L1 English, which we will call, for the purpose of this paper, Stage 0 learners. Interestingly, there were no learners who had about 25-40% final stress; there was a gap in the relevant area, with the next level starting with learners who had about 50% final stress (Stage 1). These were then followed by learners who had about 75% and 90-100% finally stressed words, with again not much in between.

This figure suggests that something is going on in individual grammars which suddenly results in leaps in the number of finally stressed syllables, much like 'parameter resetting' would. In fact, as I will illustrate in the following section, further analysis of individual grammars demonstrates that these leaps were indicative of different 'stages', stages which were characterized by different parameter settings employed by the learners.

Individual learner grammars

In this section, I illustrate individual learner grammars. In doing so, I concentrate on four trisyllabic words as examples (taken from the actual test items), which will help demonstrate learners' outputs under each stage: These words include (i) /a.ra.ba/ araba 'car', (ii) /ju.mur.ta/ yumurta 'egg', (iii) /por.ta.kal/ portakal 'orange', and (iv) /te.be.ʃir/ tebeşir 'chalk'. These respectively represent sequences of LLL, LHL, HLH and LLH syllable structure profiles, based on the weight these syllables would be assigned by the English grammar (since Turkish is not weight-sensitive). 13

13 Note that these four word types are used in this section only for expository purposes; as was mentioned before, the experiment included all possible L and H combinations with two- and three-syllable words (i.e. totalling 12 possibilities). The point of the sections to follow is only to show, for a subset of words, what kind of effects one got with different parameter settings for various parameters.

When individual learner grammars were analyzed, i.e. when learners were categorized not according to proficiency level (as is usually done in L2 research), but in terms of their behaviour, a stage-like pattern of development emerged. 14

14 All but 2 of the 46 learners tested belonged clearly to one stage or another, defined based on the parameter settings employed, as will be illustrated here.

We describe this stage-like behaviour below, illustrating the surface effects of changes in parameter settings. We start with Stage Ø, which was composed of learners who treated Turkish as if it were English, the L1.

Stage Ø / Full Transfer: Use L1 values of all parameters

Assuming that the initial state of L2 acquisition is the L1 grammar (White 1989b; Schwartz & Sprouse 1996), beginning level English-speaking learners of Turkish should transfer L1 settings of all the parameters in (26), and thus, construct right-to-left, iterative, left-headed,
bounded, binary feet with End-Rule set to Right and Extrametricality to Yes. It was found that this was indeed the case with a total of 9 subjects, all of whom were beginners. In particular, 9 of the 14 beginners tested in the experiments (see Table 3 above) belonged to this stage, with no learners from intermediate or advanced levels. Note that since, at this stage, the final syllable is always extrametrical, it never gets stressed, and the learners should never be successful 15 at stressing the final syllables of Turkish words. As is demonstrated by Figure 4, a binary moraic trochee is constructed at this stage starting from the right edge of the PWd (excluding the final syllable, as it is extrametrical), meaning that stress is assigned to the penultimate syllable if it is heavy (e.g. yu.múr.ta), otherwise to the antepenult (e.g. pór.ta.kal, á.ra.ba, té.be.ʃir). Heavy syllables can form a foot by themselves, as they are binary at the moraic level, as with [(pór).ta.<kal>] and [yu.(múr).ta] (unless they are extrametrical, as with the final syllable in (c) and (d)).

15 This is not actual success; 'success' here means correctly being able to put prominence on the final syllable and only on this syllable. Actual success would necessitate ridding the grammar of the Foot and representing Turkish prominence as intonational instead of as stress.

As Table 4 below illustrates, however, due to what I assume to be performance-related factors, these learners produced final stress 6.67% (SD: 6.12) of the time for bisyllabic and 8.54% (SD: 4.37) for trisyllabic stimuli. Further, for trisyllabic utterances, they stressed the first (antepenultimate) and the second (penultimate) syllable in equal amounts, which is reflective of the fact that half of the stimuli had a heavy second syllable (see Table 2), which is stressed since English is weight-sensitive (e.g. [L(H)<L>]), and End-Rule is set to Right (e.g. [(H)(H)<H>]). This is further evidenced by the fact that a heavy syllable, when available (and when not extrametrical), was indeed stressed. Accordingly, for trisyllabic stimuli, 94.89% of the heavy initial syllables and 98.34% of heavy penultimate syllables (i.e. those that were not extrametrical) received either primary or secondary stress.

Again, for trisyllabic stimuli, the results of a one-way ANOVA confirm that there was a statistically significant effect of syllable location (i.e. word-final, penultimate, or antepenultimate) on stress (F(2, 24) = 13.080, p < 0.0001). Furthermore, as would be expected by the representations employed at this stage (see Figure 4), the results of a Tukey paired HSD demonstrate that this difference was due to the fact that both the antepenult and the penult (both stressed around 50% of the time) differed significantly from the final syllable (rarely stressed) regarding stress attraction (p < 0.001 for both pairs), whereas the difference between the pair antepenult and the penult was not statistically significant (p = 0.834). As for bisyllabic utterances, the difference between the two syllables (93.33% vs. 6.67%) was similarly statistically significant (F(1, 16) = 901.333, p < 0.0001).
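To make the Stage Ø analysis concrete, here is a minimal sketch of a parser for this grammar: weight-sensitive moraic trochees, built right-to-left, with final-syllable extrametricality and End-Rule Right. The function and its L/H string encoding are my own illustration of the representations in Figure 4, not a formalism from the paper.

# Stage-0 sketch: build moraic trochees, either (LL) with a left head or
# (H), right-to-left over a word given as a string of 'L'/'H' weights.
def stage0_stress(weights, extrametrical=True, end_rule="R"):
    sylls = list(weights)
    # The final syllable is set aside if Extrametricality = Yes.
    domain = sylls[:-1] if extrametrical and len(sylls) > 1 else sylls
    heads = []
    i = len(domain) - 1
    while i >= 0:
        if domain[i] == "H":                  # (H): a heavy syllable is
            heads.append(i)                   # bimoraic, a foot by itself
            i -= 1
        elif i > 0 and domain[i - 1] == "L":  # (LL): left-headed trochee
            heads.append(i - 1)
            i -= 2
        else:                                 # a lone light syllable stays
            i -= 1                            # unfooted (no degenerate feet)
    if not heads:
        return None, []
    primary = max(heads) if end_rule == "R" else min(heads)
    return primary, sorted(heads)             # primary stress + all heads

# Reproduces the Figure 4 parses (0-indexed syllables):
for word, shape in [("araba", "LLL"), ("yumurta", "LHL"),
                    ("portakal", "HLH"), ("tebesir", "LLH")]:
    print(word, stage0_stress(shape))
# araba -> 0 (initial), yumurta -> 1 (penult, the heavy syllable),
# portakal -> 0 (antepenult), tebesir -> 0 (antepenult)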
Finally, regarding Iterativity, the second surface correlate of non-native-like prosody (in addition to location of main stress), as expected under the PAPH, it was also non-target-like at this stage, for it is set to Yes in English, unlike in Turkish, although, for the words in Figure 4, it is vacuously satisfied, since none of these words have enough moras to make two binary feet, as the last syllable is treated as extrametrical.

Stage 1: Reset Extrametricality from 'Yes' to 'No'

At the next stage were learners who reset the Extrametricality parameter from Yes to No, and kept all other English parameter values as they are in the L1, thereby correctly producing some forms with final stress as a result of this single change in the grammar. At this stage, we would expect the learner to still construct right-to-left, weight-sensitive, binary trochees. And since the learner's trochees will be weight-sensitive, and since final syllables are no longer extrametrical, he or she will stress the final syllable of all words that end in a consonant (representations (c) and (d)), for both codas (and long vowels) are moraic in the L1, employing the representations in Figure 5.

For final prominence, this would lead to a 50% "success" rate, given that half of the stimuli ended in closed syllables (see Tables 1 and 2). This was partially confirmed, as Table 5 below demonstrates. The fact that final stress was not observed 50% of the time in the utterances of these subjects is due to two factors: One, three subjects at this stage were still in the process of moving from the previous stage to Stage 1, and as such, produced some words with extrametrical final syllables. Two, surprisingly, and in a way not directly predicted by the PAPH, two subjects at this stage had End-Rule set to Left (having perhaps been influenced by Turkish exceptional stress), and one had a variable End-Rule, meaning that although final closed syllables received (at least secondary) stress, they did not, in the utterances of these subjects, necessarily receive primary stress. When these five subjects were excluded from the analysis, a clearer picture of Stage 1 appeared, as is illustrated in Table 6 below, where the penult and the final syllable are stressed to an equal extent, as would be expected by the representations (see Figure 5) corresponding to this stage.

As the results of a one-way ANOVA demonstrate, for trisyllabic words, the difference in stress attraction rates between the three syllables was statistically significant (F(2, 21) = 34.068, p < 0.0001). Further, this difference, as indicated by the results of a Tukey HSD test, was, as expected, because the proportion of both the penultimately stressed and the finally stressed words was statistically different from the proportion of words stressed on the antepenult (p < 0.001), although the penultimate and the final stress pairs were not statistically different from each other (p = 0.435). Likewise, for bisyllabic words, there was no statistically significant difference between penultimate and final stress (F(1, 14) = 0.004, p = 0.948), as both syllables were stressed roughly in equal amounts. Finally, the relation between syllable weight and stress is also evidenced by the fact that heavy syllables, when available, were stressed, receiving either primary or secondary stress (depending on the settings of the other parameters of the said interlanguage). Thus, for trisyllabic stimuli, 99.42% of final heavy syllables, 96.32% of penultimate heavy syllables and 95.71% of
antepenultimate heavy syllables were stressed (primary or secondary) for these 8 learners.

Remember at this point that a learner at this stage could possibly also reset a parameter like Iterativity from Yes to No, and attain full success in that domain, but as predicted by the PAPH, this does not happen until much later: On the PAPH, resetting Iterativity would be quite costly, given (25), since the parameters dependent on Iterativity (i.e. those below it in (23)) would also be affected by such a change. 16

Finally, note that this is a stage where the interlanguage grammar is neither like the L1 nor like the L2, neither in terms of the parameter settings employed nor in terms of the surface location of stressed syllables. For example, a word like /a.ra.ba/, which was stressed on its initial syllable in the previous stage (as with the L1), is stressed on its penultimate syllable at this stage (i.e. /a.rá.ba/). However, the penultimate syllable is not the most prominent syllable in the L1 or the L2; just like the L1 would stress the initial syllable in such a word, the L2 would place prominence on the final syllable (see Section 2). The fact that these subjects are stressing the penultimate syllable is, thus, clear evidence that they make changes to their grammar on a parameter-by-parameter basis, instead of randomly increasing the number of finally stressed/prominent words in their outputs in light of the input they receive in the L2 (i.e. finally prominent words). That is, this knowledge could not have been acquired via language transfer alone or L2 input alone (see Finer & Broselow 1986 for the same argument from syntax; see also Eckman 1991; Archibald 1992; 1993; 1995; Eckman & Iverson 1994; Carlisle 1997; 1998, for similar arguments from phonology).

16 One might argue that such a learner can, instead, reset End-Rule to No, which is a terminal parameter under Iterativity, and, thus, have no Iterativity. This is not possible for two reasons. First, since End-Rule is a Left/Right parameter, one cannot simply reset it to No; its effect cannot be removed unless it is deactivated, which is hypothesized to be impossible, given (25a). Second, even if deactivation of End-Rule turned out to be possible, since there are iterative systems with no End-Rule, e.g. Tübatülabal (e.g. Hayes 1981; Prince 1983), even if End-Rule was, somehow, deactivated, Iterativity would still not be irrelevant.

Stage 2: Reset Extrametricality and Foot-Head

A learner at this stage will be able to stress not only word-final syllables that end in a coda consonant (as in Stage 1 above; see representations (c) and (d)), but also those ending in a vowel and not preceded by a closed syllable (as in representation (a); see Figure 6), resulting in final stress roughly 75% of the time. This analysis is one that is relatively easy to arrive at under the PAPH: The learner must reset two existing terminal parameters (Extrametricality and Head, see (23)), but he or she does not have to make any previously set parameters (de facto) inactive (see Figure 6).

Since, in this grammar, all final closed syllables (50% of the stimuli, see Tables 1-2), as well as all final open syllables preceded by open syllables (25% of all stimuli), will be stressed, we would expect learners at this stage to have final stress roughly 75% of the time, with the remaining stimuli stressed on the penult (as with (b) in Figure 6), and with no stimuli stressed on the antepenult. As Table 7 below indicates, this prediction was borne out, as the percentage of finally stressed words was very close to 75%. In addition, as
expected, very few trisyllabic words bore primary stress on their initial (antepenultimate) syllable, because of the two-syllable stress window expected to exist at the right edge at this stage. A one-way ANOVA was conducted to test for statistical significance. For trisyllabic words, the difference between the three syllable locations' stress attraction rates was significant (F(2, 18) = 86.347, p < 0.0001). In addition, and as was expected, this difference, as indicated by the results of a Tukey HSD test, was because the proportions of syllables with final, penultimate and antepenultimate stress were all statistically different from each other (p < 0.001 for each of the three pairs, i.e. antepenult-penult, antepenult-final, penult-final). Similarly, regarding bisyllabic stimuli, the differences in terms of stress attraction rates between the two syllables (penult and final) were statistically significant (F(1, 12) = 156.141, p < 0.0001).

As mentioned above, at this stage, words ending in a light syllable immediately preceded by a heavy/closed syllable (e.g. yumurta) still bear non-final (primary) stress. This is due to the fact that Weight-Sensitivity is still set to Yes in the grammars of these learners; as such, a heavy syllable, when available, must be stressed. (Thus, 98.60% of final, 95.65% of penultimate, and 96.40% of initial heavy syllables received primary or secondary stress in trisyllabic stimuli.) To put it another way, Weight-Sensitivity, which helped learners achieve some finally stressed words in the previous stage when the grammar was trochaic, prevents them from having final stress in all of the cases at this stage.

The 'logical' next step, for a cognitively driven grammar, would therefore be to reset Weight-Sensitivity also from Yes to No, and have final stress for all word types. However, no such learners were found to exist in this study; none of the subjects had a weight-insensitive iambic grammar, which, I believe, is because such grammars are not permitted by the universal inventory of foot types, as was discussed in much previous formal phonological literature (see e.g. McCarthy & Prince 1986; Hayes 1995, among others). Instead, as will be seen in the next section, some learners lengthened word-final open syllables, thereby turning them into heavy syllables, which could be stressed on a weight-sensitive grammar.

Stage 3: Reset Extrametricality and Head (as in Stage 2), plus word-final open-syllable lengthening

The problem with the previous stage was with final Heavy-Light (HL) sequences (25% of the stimuli) since, given Weight-Sensitivity, these are stressed on the H, even in an iambic grammar. As mentioned above, however, there were no learners who reset Weight-Sensitivity to No. Instead, some learners lengthened word-final open syllables, much like resorting to 'iambic lengthening' (Hayes 1995), thereby changing word-final light syllables into heavy, which meant that these syllables could be stressed on a weight-sensitive iambic grammar (see Figure 7). As expected, almost all of these learners' utterances were stressed on the final syllable, with nearly no words stressed on the penult or the antepenult, as is indicated in Table 8.
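As a companion to the Stage Ø sketch above, the following illustrative function (again my own simplification, not the paper's formalism) computes the primary-stress syllable for the Stage 2 iambic grammar, and for Stage 3 by additionally lengthening a word-final open syllable before footing.

# Sketch of the rightmost weight-sensitive iambic foot, (H), (LH), or a
# right-headed (LL), under End-Rule Right; Stage 3 adds 'iambic
# lengthening' of a word-final light syllable (final L becomes H).
def rightmost_iamb_head(weights, final_lengthening=False):
    sylls = list(weights)
    if final_lengthening and sylls[-1] == "L":
        sylls[-1] = "H"          # Stage 3: lengthen the final open syllable
    last = len(sylls) - 1
    if sylls[last] == "H":       # (H) or (LH): the final syllable is head
        return last
    if last > 0 and sylls[last - 1] == "L":
        return last              # (LL) iamb, right-headed: final stress
    return last - 1              # final HL: the H must carry the foot

for shape in ["LLL", "LHL", "HLH", "LLH"]:
    print(shape,
          "Stage 2:", rightmost_iamb_head(shape),
          "Stage 3:", rightmost_iamb_head(shape, final_lengthening=True))
# Stage 2 stresses the final syllable in LLL, HLH and LLH but the penult
# in LHL (yumurta); Stage 3 yields final stress across the board.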
A one-way ANOVA was conducted to test for statistical significance. For trisyllabic stimuli, the results confirmed that the difference regarding stress attraction between the three syllables was statistically significant (F(2, 36) = 818.208, p < 0.0001). Further, this difference, as indicated by the results of a Tukey HSD test, was, as expected, because the proportion of the finally stressed words was different (much greater) from the proportion of words stressed on the antepenult or penult (p < 0.0001 for both pairs), although the pairs 'penultimate' and 'antepenultimate' were not statistically different from each other (p = 0.868). Similarly, for bisyllabic stimuli, the difference between finally stressed words and words with penultimate stress was statistically significant (F(1, 24) = 2090.667, p < 0.0001).

In sum, learners at this stage were able to consistently place stress on the final syllable of Turkish words, unlike those at previous stages, but were still unable to produce Turkish words with only one prominent syllable (see e.g. (b) and (c) in Figure 7), as Iterativity was still set to Yes in their grammars.

Stage 4: Reset Iterativity from 'Yes' to 'No' (in addition to the Stage 3 changes)

At this stage, learners would be making four changes in their grammar: Extrametricality from Yes to No, Foot-Head from Left to Right, word-final lengthening and, crucially, Iterativity, a non-terminal parameter, from Yes to No. Out of the four general grammars covered so far, this last one should be the most difficult to employ on the PAPH, requiring a change that involves a parameter with embedding (see (23)), which is hypothesized on the current proposal to be highly costly. As such, a change in Iterativity alone was not expected by itself; it is a change that is predicted to come only after some of the easier changes have been made in the grammar; it should, thus, appear together with a set of other changes, as is exemplified in Figure 8. This grammar should, therefore, be one that is resorted to only after other simpler options are rendered non-optimal by L2 learners in capturing final prominence and lack of secondary stress in Turkish. It was predicted that only advanced or near-native learners should consider this.

These predictions were indeed borne out. Only two advanced learners were able to reset Iterativity from Yes to No. Both of them were among the most advanced learners tested, and one was in fact the most advanced among all the learners tested who had spent significant time in Turkey. Only 19.17% of the words they produced had secondary stress, although almost all of their outputs, except those in the LL and LH conditions, could have accommodated more than one foot, especially given that they had final lengthening (as with learners at the previous stage), thereby increasing the number of moras in a word and thus the possibility of multiple stresses. Given the many iterative forms (and many that lack Iterativity), it could also be stated that Iterativity was in the process of being reset from Yes to No for these learners. In terms of the percentage of syllables stressed, the results are summarized in Table 9. Note that at this stage, unlike stages 0, 1, 2, and 3, learners are finally able to correctly get rid of Iterativity altogether, 17 i.e.
even for longer utterances, because there will be a single foot at the right edge for any given word. On the other hand, this may or may not mean that they were able to rid their grammar of the Foot, as their outputs may still be different from the actual Turkish grammar in that the acoustic correlates of final prominence would include duration (and possibly intensity), in addition to F0. In fact, this is what was observed in the current experiments for most of their outputs, but a more detailed account of whether very advanced (or near-native) learners of Turkish can move on to yet another stage and additionally rid their grammar of the Foot is beyond the scope of this paper.

17 Another strategy that gets rid of Iterativity altogether would, of course, be to reset Boundedness from Yes to No, instead of resetting Iterativity. Such a strategy would, however, be even more difficult to employ, for Boundedness has even more parameters dependent on it than Iterativity does in the tree in (23). Determining, based on an output form, which of the two strategies is being employed by a given learner is nearly impossible, however; nonetheless, structurally speaking, changing Boundedness from Yes to No (and keeping everything else the same as in Stage 4) would result in interlanguage representations such as arabá, tebeşír, yumurtá (compare these with the interlanguage forms in Figure 8 that result from a change in Iterativity).

Discussion

First and foremost, the results bear out the hypotheses generated by the Prosodic Acquisition Path Hypothesis (PAPH) proposed in this paper (see (25) in Section 3.3). In the most general terms, the English-speaking subjects went through a number of stages, consistent with the path proposed in this paper, making changes towards restructuring their interlanguage grammar on the basis of the L2 input, although the representations they entertained are not explicable on the basis of L2 input alone (see below). Changes resulting from terminal parameters being reset (e.g. Extrametricality and Headedness) were employed first; changes that involved parameters with embeddings, such as Iterativity, were only made at later stages, and by the most advanced learners. Further, as expected, no English-speaking learner seemed to be able to rid their grammar of the Foot, but two learners, those with the highest level of proficiency (the two who reset Iterativity), had some outputs that could be analyzed as footless, presenting partial evidence against the strong form of the proposal that, once projected, the Foot will never be expunged from the grammar. Perhaps, the Foot can also be lost, but only at near-native levels of proficiency. Further examination of this possibility will have to wait for future research on near-native speakers, and optimally involve an investigation of the acoustic correlates of stress/prominence, too.
All terminal parameters whose being reset served to make the interlanguage grammar sound more native-like were, in the end, reset by the English-speaking learners, with the exception of Weight-Sensitivity, which, though terminal, was not reset (more on this below; see also the discussion at the end of Section 5.2.3). We have seen, for example, all English-speaking learners, except for the nine with the lowest level of proficiency, reset Extrametricality (from Yes to No). We have also seen several learners additionally reset Headedness (from Left to Right). Both are terminal parameters, and resetting both served to better account for the L2 input, thereby making their grammars sound more native-like. On the other hand, although resetting parameters with embeddings, such as Iterativity and Boundedness, would also have served to make their interlanguage grammar appear more native-like, these strategies were not employed by the vast majority of the English-speaking learners, consistent with the PAPH claim that these parameters should be very difficult to reset from their Yes to No settings (see (25)). Only two English-speaking subjects, those with the highest level of proficiency in Turkish, were able to reset Iterativity from Yes to No, and even for them, the parameter had not completely been reset to its No setting; there were still several utterances (about 20%, see Section 5.2.5) that were compatible with the Yes setting of this parameter. That is, their grammar was most probably still undergoing transition with respect to the "correct" setting of this parameter.

On the face of it, the finding that Weight-Sensitivity was not reset by the English-speaking learners from its Yes to No setting seems to be in conflict with the PAPH; after all, all terminal parameters should be easy to reset according to the PAPH, and there seems to be no reason, if one can reset two terminal parameters, as was done by the Stage 2 learners, why resetting a third one, Weight-Sensitivity, should be impossible, at least at a later stage in development. In fact, that would make the grammar more symmetric on the surface, since all words would, then, end in a stressed syllable, no matter what weight the final syllable has, and the ambient input would better be accounted for by this analysis. Thus, there seems to be no obvious reason why some advanced learners would not follow such a path.

One could claim that the reason why no such pattern was observed is simply due to chance, i.e. that it could have arisen if more learners had been tested. 18 I will not follow this line of thinking, however; I will, instead, take a different, stronger, position, and will argue that the PAPH interacts, as any theory of language acquisition should, with other factors in determining the level of difficulty for L2 learners, the other factor being language universals in this case.

18 One could also argue, as alluded to in Section 3.2, that Weight-Sensitivity is not, in fact, a terminal parameter, but that it has more structure underneath it, since, in all weight-sensitive languages in which codas are weight-sensitive, long vowels are weight-sensitive, too, but not vice versa. That is, it could be argued that Weight-Sensitivity is, in fact, Weight-Sensitivity-to-Vowels (with the settings of Yes and No), under the Yes setting of which there is Weight-Sensitivity-to-Codas (again with Yes or No settings). This will capture the implicational relationship between being sensitive to the weight of long vowels vs. codas observed in world languages. For L2 acquisition, then, one could argue that Weight-Sensitivity is difficult to reset because it has embeddings underneath. But this would not explain why Weight-Sensitivity-to-Codas, which would, under this analysis, be a terminal parameter under the maximal parameter of Weight-Sensitivity-to-Vowels, cannot be reset from Yes to No, either. Clearly, putting more structure under Weight-Sensitivity is not the solution.

The PAPH is as much a learning theory as it is a UG-based approach aiming to constrain our predictions of what L2 learners will and will not do. Its restrictive power comes both from the learning principles employed and the UG-constrained principles and parameters assumed to hold of natural languages. It is the learning principles that predict terminal parameters to be easier to reset than parameters with embeddings, for example Iterativity. UG does not, otherwise, preclude a language from setting Iterativity to No. Non-iterative footing is an option observed in many languages of the world (e.g. Southeastern Tepehuan; Kager 1997; 1999), but is rendered difficult according to the PAPH for learners coming from a language with a Yes setting.

On the other hand, once Headedness has been reset from trochaic to iambic (i.e. left- to right-headed), it is UG principles that rule out the option of resetting Weight-Sensitivity from Yes to No; otherwise, if the PAPH held true by itself, without being limited by language universals, all terminal parameters should be equally easy to reset from their Yes to No settings, as long as there is positive evidence triggering such a change. And in the case of Weight-Sensitivity, resetting it from Yes to No, for learners with iambic grammars such as those at Stage 2 and later, would, in fact, lead to higher levels of success with respect to final stress/prominence. That this was not done, neither by the Stage 2 learners nor by any of the subjects at later stages (who instead lengthened final light syllables; compare Figures 6 and 7, for example), despite being predicted to be easy by the PAPH and despite being a very reasonable option from a cognitive point of view, is due to UG precluding such an option. That is, a strategy could be predicted to be easy on the PAPH, yet is not chosen since it is not permitted in the inventory of options allowed by UG. This is the inverse of the preceding scenario, that an option could be permitted by UG (e.g. Iterativity=No), but is too difficult to adopt given the learning principles integral to the PAPH.

If there is a universal foot inventory that excludes quantity-insensitive iambs, it is not surprising that no learner tested in our experiments followed such a path. This is despite the fact that doing so would be pedagogically and cognitively reasonable. All final syllables would, after all, be stressed, and the ambient input would be better accounted for. Therefore, this finding seems to present strong evidence for the view that interlanguages are UG-constrained (see e.g. White 1989b; 2003a).
One might wonder, at this point, why no weight-insensitive trochees were, then, observed in the current study. If resetting Weight-Sensitivity, a terminal parameter, was not allowed in the grammars of the English-speaking subjects with an iambic grammar due to a linguistic universal prohibiting such an option, why is it that it was not done by the English-speaking subjects with a trochaic grammar, either? The answer to this question is rather simple. Though Weight-Sensitivity is a relatively easy parameter to reset, and its being reset is permitted (provided that the language is trochaic) by whatever linguistic universals are responsible for the foot inventory available to speakers, there was no reason for the learners with trochaic grammars to reset it, given the ambient input. In fact, Weight-Sensitivity being reset would result in a decreased success rate among these learners, who were discussed under Stage 1 above; final syllables would, then, never be stressed, even when they ended in an H.

The question remains, however, as to how learners would behave in moving from a weight-sensitive trochaic language like English to a weight-insensitive trochaic language like Gooniyandi or Greek. Given the PAPH, there should be no difficulties in making such a change in the grammar, since Weight-Sensitivity is a terminal parameter (and since UG does not preclude weight-insensitive trochees). However, the prediction remains to be tested.

Just like the results of this study are informative regarding UG in that UG-disallowed options were not employed by the learners, they are also informative in that the options the subjects did entertain all corresponded to what is actually observed in natural languages of the world. For example, just like Stage 0 corresponds to English, Stage 1 corresponds to languages like Tol (Fleming & Dennis 1977) and Bergüner-Romansh (Kamprath 1987), both right-to-left trochaic languages without Extrametricality. It is interesting in this regard how learners' outputs at this stage were more similar not to the target language or the native language, but instead to a language like Tol, in which they never received input, producing outputs like /arába/, with stress on the penult, which is neither like the L1 nor the target language. As such, such outputs are inexplicable unless they are available to the learners as part of UG (see Finer & Broselow 1986 for the same argument from syntax; see also Mairs 1989; Archibald 1992; 1993; 1995 for similar arguments from phonology). The other stages also corresponded to natural languages: Stage 2 was quite like Aklan (Hayes 1981) and Tiberian Hebrew (McCarthy 1979) (see also Hayes 1995). Stage 3 is just like the various dialects of Ojibwa (see e.g. Bloomfield 1939; Piggott 1980; 1983). And Stage 4 is quite like Farsi Persian (Toosarvandani 2004; Hosseini 2014).
All things considered, the results of the current study provide strong evidence for the PAPH. This is particularly true when individual results/learner grammars (Section 5.2) are taken into consideration. Group results (Section 5.1) are informative only to the extent that they illustrate the range of variation observed for final stress among the subjects, implying that different options are being considered, some of which yield final stress more often than others. It is through the individual results that we obtain some insight into the grammars of each English-speaking subject, given that a certain output form with a certain stress pattern could be indicative of different parameter settings for different subjects. For example, it was predicted, and later found, that learners with trochaic and iambic grammars demonstrate the same behaviour on the surface for some conditions in the experiments. Thus, LH and LHL words were both stressed on the H, with the former resulting in final and the latter non-final stress, by both trochaic and iambic English-speaking groups, since both groups had Weight-Sensitivity set to Yes. The differences between the grammars of the two groups of learners (i.e. trochaic vs. iambic) could have been obscured if each learner had not been analyzed separately based on their overall behaviour and if only a few conditions had been included, instead of testing all 12 possible combinations of open and closed syllables (see Tables 1 and 2).

Conclusion

The focus of previous research in L2 acquisition of stress/prominence has almost exclusively been on English (see e.g. Archibald 1993; Pater 1997; Tremblay 2006). The issue has been whether learners of English-type languages are able to stress the correct syllable or not, and crucially, not whether learners of fixed-stress languages like Turkish (or French) can manage not to stress any (or many) syllables. Focusing on Turkish, I have presented a general, comprehensive account of the L2 acquisition of word-level prosody in this paper, an account that considers most parameters proposed in the literature that arise from having foot structure, as well as the Foot itself, in an effort to account for the L2 acquisition of stress in natural language. The result is the prediction of a "path," or, rather, several different acquisition paths, depending on the L1 and L2 involved.

In the most general terms, the PAPH can be summarized under two major points, some of which were tested in this paper, with English-speaking learners of Turkish; others require further research with different L1s and/or L2s: (i) It is impossible to deactivate a parameter altogether, but de facto deactivation, by means of resetting a parameter which embeds the relevant parameter from Yes to No, is possible, though highly difficult; (ii) Some parameters are easier to reset than others, depending on whether they are terminal or not in the parameter tree proposed in (23), or depending on whether the change is from No to Yes, or Yes to No.
To give an example, English-speaking learners of Turkish were predicted to have the greatest difficulties in ridding their grammar of foot structure (as Footedness is the parameter with the greatest number of embeddings in the tree), and they were predicted not to be able to deactivate certain foot-related parameters that are irrelevant in L2 Turkish. They were, on the other hand, hypothesized to be able to reset the values of some of the parameters in their L1, on the basis of positive evidence (some more easily than others), and consequently to construct several different interlanguage grammars, grammars that are neither like the L1 nor the L2, but are possible grammars attested among the natural languages of the world.

In order to test these predictions, a production experiment was constructed, with various conditions, so as to determine the relevant aspects of learners' prosodic grammars (see Section 4). In particular, all possible Light- and Heavy-syllable combinations were generated in the bisyllabic and trisyllabic stimuli used in the experiment, amounting to a total of 4 conditions of bisyllabic and 8 conditions of trisyllabic stimuli. Having as many conditions as possible proved crucial, as it is nearly impossible to determine the correct setting of a parameter by looking only at a few word shapes, since the same behavior on the surface can be caused by the interaction of several different parameters. This, to my knowledge, is the first experiment in the literature to consider words of all possible weight profiles.

The results of the experiments presented in Section 5 lend strong support to the PAPH. In particular, no L1 English-speaking learner of L2 Turkish seemed able to produce footless outputs or had fully reset the Footedness parameter (except, partially, the two most advanced learners, who appeared to have some potentially footless outputs); their productions all involved foot structure. This is in contrast to adding the Foot, which does not seem to be nearly as difficult, as is implied by the findings of research with French-speaking learners of English (Pater 1997; Tremblay 2007). Although, similar to Turkish, French is also arguably a footless language (e.g. Özçelik to appear), in which prominence regularly falls on the final syllable of a PPh, French-speaking learners of English are able not only to produce footed utterances, but also to have the correct setting of most of the parameters of stress, and thus produce iterative binary trochees.
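As a quick illustration of the stimulus design just described, the twelve conditions follow directly from crossing light (L) and heavy (H) syllables over bisyllabic and trisyllabic words; the short sketch below simply enumerates them.

```python
# The 12 weight-profile conditions: 2^2 bisyllabic + 2^3 trisyllabic shapes.
from itertools import product

conditions = ["".join(p) for n in (2, 3) for p in product("LH", repeat=n)]
print(len(conditions))   # 12
print(conditions)        # ['LL', 'LH', 'HL', 'HH', 'LLL', ..., 'HHH']
```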
Although the learners of Turkish tested in this study were not able to produce footless utterances, they did not, at the same time, use the L1 settings of all foot-related parameters; they reset several parameters in order to better accommodate the L2 input, and in doing so, they demonstrated stage-like behavior (see Section 5.2), along the lines predicted by the PAPH. A great majority of the subjects were, for example, able to reset Extrametricality, a terminal parameter, from Yes to No, thereby producing some words with final stress. Words ending in an H were, in particular, stressed on their final syllable by these learners, for Weight-Sensitivity, in their grammar, was still set to Yes, as in the L1. Several subjects were also able to reset Foot-Head from trochaic to iambic, in addition to resetting Extrametricality, and were thus able to stress a higher number of syllables in word-final position, though this did not result in 100% final stress, given that Weight-Sensitivity was still set to Yes, resulting in non-final stress in final HL sequences. At the next stage were learners who not only reset Extrametricality (from Yes to No) and Foot-Head (from trochaic to iambic) but also lengthened final open syllables, thereby rendering them heavy, and thus achieving final stress at all times. None of the learners with iambic grammars, though, reset Weight-Sensitivity from Yes to No, although this would also have resulted in all words being stressed on their final syllables, as with final lengthening, and with the additional benefit of not violating Faithfulness to underlying (target) vowel length. Finally, only two subjects, two of the most advanced subjects tested, were able to utter words without secondary stress, providing evidence that Iterativity, a nonterminal parameter, i.e. one with embeddings, is indeed a very difficult parameter to reset, as was predicted by the PAPH.

Note that this research also contributes to our understanding of variability in L2 production, a topic that has generated much debate both in L2 syntax/morphology (see e.g. Lardiere 1998a; b; Ionin & Wexler 2003; White 2003b; Ionin, Ko, & Wexler 2004 for different accounts of variability) and L2 phonology (e.g. L. Dickerson 1975; W. Dickerson 1976; Tarone 1980; 1983; Tropf 1987; Eckman 1991; 2004; Broselow et al. 1998; Hancin-Bhatt 2000; Major 2001; Lombardi 2003; Broselow 2004). Focusing on prosodic variability, the current research suggests that what may look like random phonological variation on the surface can be accounted for via transfer of L1 prosodic representations and a well-defined process of moving away from these representations on a path where options are constrained by UG.
Many predictions can be made based on the PAPH, some of which we were able to test in the L1 English/L2 Turkish setting. Other predictions go beyond those tested in this paper, and could better be tested with different L1/L2 pairs. Take, for example, the prediction that resetting terminal parameters is easier than resetting parameters with embeddings. Though we did find some evidence of this in our experiment (Iterativity was, after all, very difficult to reset, unlike Extrametricality or Foot-Type), it is not clear if all terminal parameters are easier to reset than their non-terminal counterparts. In this paper, Weight-Sensitivity was not reset from its Yes to No setting, despite being a terminal parameter. We have linked this finding to two facts: (i) weight-insensitive iambs are not permitted by UG, so if interlanguages are natural languages, it is expected that the learners with iambic grammars will retain Weight-Sensitivity, and (ii) there was no reason for the learners with trochaic grammars to have a weight-insensitive system, since that accounts for the L2 input less well than the weight-sensitive trochees that are readily available from the L1. In order to further test the predictions of the PAPH in this regard, it would be optimal to have an experimental setting with two trochaic languages, one with weight sensitivity, another without, and to see, among other things, if learners can reset this parameter from Yes to No. Further, such a study, if bidirectional, could help us see if moving from No to Yes is indeed easier, as predicted by the PAPH, than vice versa.

One can test, for example, English-speaking learners of Greek, which has weight-insensitive trochees (Malikouti-Drachman & Drachman 1989; Drachman & Malikouti-Drachman 1999; Revithiadou 2004), as well as Greek-speaking learners of English. Both languages are trochaic, while the two differ in that whereas English is weight-sensitive, Greek is not. It should be possible for English-speaking learners of Greek to reset Weight-Sensitivity from Yes to No, since weight-insensitive trochees, unlike weight-insensitive iambs, are not ruled out by the universal foot inventory allowed by UG. Further, such a change should be rather easy, according to the PAPH, since Weight-Sensitivity is a terminal parameter. Finally, such a bidirectional study would allow us to test another prediction of the PAPH, that moving from Yes to No is more difficult than moving from No to Yes. Though both groups of learners should be able to reset the relevant parameter, the task should be easier for Greek-speaking learners of L2 English, for they are going from No to Yes, whereas English-speaking learners of L2 Greek are moving from Yes to No.
Finally, it should be noted that much of what was argued in this paper depends on the hierarchical tree representation of metrical parameters presented in (23). Certain parts of the tree could possibly be modified, however. For example, it might be better to somehow tie together Foot-Head and Weight-Sensitivity, for we know that iambic languages are always weight-sensitive, whereas trochaic languages can be weight-sensitive or not (see e.g. Hayes 1995; but see Alshuler 2009). Similarly, some of the parameters could have more structure underneath; Weight-Sensitivity, for example, can be characterized as Weight-Sensitivity-to-Vowels, under which exists Weight-Sensitivity-to-Codas, since all languages that are sensitive to coda weight are also sensitive to the weight of long vowels, but not vice versa. I leave further elaboration of the tree to future research.

In conclusion, the L2 acquisition data investigated in this paper presents evidence for the Prosodic Acquisition Path Hypothesis that was proposed here, as well as for UG-based theories of second language acquisition in general. At the same time, several aspects of this hypothesis, for which I presented formal and L1 acquisition-based arguments, remain to be tested by future experimental research with L2 learners of various language pairings.
Cutaneous metastasis as the first manifestation of occult malignant breast neoplasia

Cutaneous metastases from primary internal malignancies occur in 0.7-9% of patients with cancer. We report a 65-year-old female patient referred for evaluation of normochromic papules on the trunk and upper limbs that had been present for three months. A skin biopsy revealed diffuse cutaneous infiltration by small round cell tumors. Immunohistochemistry was positive for AE1/AE3, CK7, estrogen receptor and mammaglobin. The final diagnosis was cutaneous metastasis of occult breast cancer, since the solid primary tumor was not identified. The location of the primary tumor cannot be determined in 5-10% of cases. In these cases, 27% are identified before the patient's death, 57% at autopsy, and the remaining 16% cannot be located.

INTRODUCTION

Cutaneous metastasis is defined as a neoplastic lesion affecting the dermis or the subcutaneous tissue that originates from another primary tumor. 1 Three basic patterns of metastasis mechanisms are reported: mechanical tumor stasis (anatomical proximity and lymphatic drainage), organ-specific (selective affinity of tumor cells to a specific organ), and nonselective (independent of mechanical and organ-specific factors). 1 Malignant neoplasms that most commonly metastasize to the skin include breast cancer, colon cancer, melanoma, lung cancer, ovary cancer, sarcomas, and cervical cancer. 1 In most cases, cutaneous metastasis develops after the diagnosis of the primary internal malignancy and late in the course of the disease. An interval of five years from the initial diagnosis to the skin metastases is common. 2 0.7-9% of patients with cancer develop skin metastasis, which is considered a rare dermatological event. 2,3 However, with the increased incidence of internal cancer, dermatologists may be the first to discover the disease. 2 A high index of clinical suspicion is essential for the diagnosis of cutaneous metastatic lesions. 3

CASE

A 65-year-old female patient was referred to our institution for evaluation of asymptomatic papules and nodules on the trunk and upper limbs that had been present for three months before the consultation. The patient was unable to report the initial morphology or changing pattern of the lesions. She also reported weight loss, which was not measured, and asthenia. Remarkable personal history included anemia treated with ferrous sulfate and a sectorectomy of a benign left breast lump eight years before, which was confirmed anatomopathologically. The patient was G4P3A1 and had been in menopause for 16 years. She denied alcohol abuse, smoking, or remarkable family history. The lesions were slightly movable, 0.3-1 cm in diameter, and located on the roots of the arms, the chest and the back. We also observed a linear pearl-colored lesion. Complementary workup showed a positive H. pylori test, while chest X-ray, mammography, tomography of the abdomen/pelvis, and colonoscopy were normal and identified no neoplasias. The patient was then referred to an oncologist for follow-up and chemotherapy. Unfortunately, she died 10 months after the diagnosis.

DISCUSSION

The frequency of cutaneous metastases has increased due to higher cancer survival rates and better therapeutic alternatives. 4 The most common sites of metastasis (75%) are the scalp, navel, chest wall, and abdomen, and in 75% of women, they occur on the chest and abdomen. In women, the most common primary malignancy is breast cancer (69%), which tends to metastasize later to the anterior thoracic wall.
Breast cancer immunohistochemistry reveals a cytokeratin pattern of CK7+/CK20-. Estrogen and progesterone receptors are markers that increase the detection sensitivity of breast cancers. 7 Despite imaging techniques and immunohistochemistry, the primary tumor location cannot be determined in 5-10% of cases. In general, patients with metastatic carcinoma of unknown primary site have a worse prognosis. In these patients, the primary tumor is identified in only 27% of cases before death; in 57%, at autopsy; and for the remaining 16%, the primary tumor cannot be identified. 8
Accuracy of full-arch digitalization for partially edentulous jaws — a laboratory study on the basis of coordinate-based data analysis

Objectives To compare the accuracy (trueness and precision) of direct digitization of four different dental gap situations with two IOS (intraoral scanners). Materials and methods Four partially edentulous polyurethane mandible models were used: (1) A (46, 45, 44 missing), (2) B (45, 44, 34, 35 missing), (3) C (42, 41, 31, 32 missing), and (4) D (full dentition). On each model, the same reference object was fixed between the second molars of both quadrants. A dataset (REF) of the reference object was generated by a coordinate measuring machine. Each model situation was scanned by (1) OMN (Cerec AC Omnicam) and (2) PRI (Cerec Primescan AC) (n = 30). Datasets of all 8 test groups (N = 240) were analyzed using inspection software to determine the linear aberrations in the X-, Y-, and Z-axes and the angular deviations. Mann-Whitney U and two-sample Kolmogorov-Smirnov tests were used to detect differences in trueness and precision. Results PRI revealed higher trueness and precision in most of the measured parameters ($\vec{V}_E$ 120.95 to 175.01 μm, $\vec{V}_E(x)$ -58.50 to -9.40 μm, $\vec{V}_E(z)$ -70.35 to 63.50 μm), while OMN showed higher trueness for $\vec{V}_E(y)$ regardless of model situation (-104.90 to 34.55 μm).
Model D revealed the highest trueness and precision in most of the measured parameters regardless of IOS ($\vec{V}_E$ 120.95 to 195.74 μm, $\vec{V}_E(x)$ -9.40 to 66.75 μm, $\vec{V}_E(y)$ -14.55 to 51.50 μm, $\vec{V}_E(z)$ 63.50 to 120.75 μm). Conclusions PRI demonstrated higher accuracy in the X- and Z-axes, while OMN depicted higher trueness in the Y-axis. For PRI, Model A revealed the highest distortion, while for OMN, Model B produced the largest aberrations in most parameters. Clinical relevance Current results suggest that both investigated IOS are sufficiently accurate for the manufacturing of tooth-borne restorations and orthodontic appliances. However, both the hardware specifications of the IOS and the presence of edentulous gaps in the dental model have an influence on the accuracy of the virtual model dataset.

Introduction

Among other factors, the accuracy of indirect prosthetic restorations is determined by the accuracy of reproduction of the clinical situation. Even though conventional impression techniques have been successfully applied in dentistry for the past century, digital impression technologies dominate the modern era of patient rehabilitation, with intraoral scanners (IOS) at the forefront [1]. Examinations of the performance of different IOS, however, vary significantly in their outcomes, possibly due to discrepancies in software versions, calibration, operator experience, study design, and evaluation method [2,3]. Intraoral scanning devices utilize optical measuring principles to digitize the oral anatomy. In essence, many single images are captured by an intraoral camera and consequently stitched together with the use of a software algorithm to generate a digital model. Image overlap is, however, prone to errors inherent to the iteration process, which accumulate as the number of superimposed images increases, causing the overall error in the final data. This superimposition error affects the accuracy of the digital dataset and has been theorized to depend on several factors, including the iteration algorithm, optical technology, size and number of captured images, scanning path, distinctiveness of the captured surface, and operator experience [2]. Predominantly, two different methods have been described for the assessment of the accuracy of digital models, namely, the calculation of surface differences after dataset superimposition and the metrical analysis and comparison of reference geometries [4].
Limitations of dataset superimposition with best-fit algorithms have been widely discussed, including error underestimation arising from the alignment of datasets in a most optimal position and inaccuracies generated by the iterative algorithm [5,6]. Nonetheless, a highly accurate dataset of the clinical situation, required for the calculation of reference geometries, is usually difficult to obtain under in vivo conditions. In the available literature, reference geometries are mostly employed for the examination of either fully dentate arches (spheres, metal bars) or completely edentulous situations (scan bodies) [4,[7][8][9][10]. Recent studies have concluded direct digitization with IOS of single teeth, quadrants, and hemi-arches to be equivalent to or even more accurate than conventional techniques [4,5,[11][12][13], while differing opinions and data exist on the accuracy of complete arch scans [14][15][16][17][18]. To date, little is known about the influence of edentulous areas (gaps) on the accuracy of intraoral scanning. Several authors report lower accuracy when edentulous arches are directly digitized and have concluded optical impressions of edentulous areas to be more challenging due to the lack of distinctive anatomical features and mucosal mobility [19][20][21]. Yet very few studies have investigated the performance of IOS on partially edentulous dentitions [22,23]. Therefore, the current in vitro study attempts to compare the trueness and precision of the direct digitization of four different dentitions with two IOS. The null hypotheses were that, with regard to accuracy, there would be (H0_1) no quantitative differences between the two IOS and (H0_2) no differences between the different model situations representing different patterns of missing teeth.

Testing models

Four polyurethane mandible models (AlphaDie MF, LOT 2,012,008,441; Schütz Dental GmbH, Rosbach, Germany), each displaying a different partially dentate situation, were used as testing models (Fig. 1). The same straight metal reference bar, made of stainless steel (GARANT, DIN 875-00-g; Hoffmann Group, Munich, Germany), was fixed between teeth 47 and 37 in each model. The surface of the bar was matt as a result of the manufacturing process (Fig. 2).

Reference measurement and dataset of the bar

To determine the reference values of the metal bar, a measurement was performed using a coordinate measuring machine (CMM: Mitutoyo Crysta Apex C754; Createch Medical, Mendaro, Spain; software: MCOSMOS Mitutoyo Software; Mitutoyo, Neuss, Germany) at a temperature of 20 °C before placing the bar in the model. The machine uses a 0.5 mm spherical ruby probe to measure the x-, y-, and z-coordinates of surface points on the bar. The maximum permissible error of the CMM is calculated as MPEe = [k + (multiplier * L)/1000] μm, where k is the systemic, length-independent error of the machine, the multiplier is a constant that defines the travel-dependent error, and L is the length of travel in millimeters; for this machine, MPEe = 1.9 μm + (3*L/1000) [24]. The generated surface tessellation language (STL) dataset was imported into the inspection software (Geomagic Control 2015; Version 2015.1.0.1919; Geomagic, Morrisville, NC, US) and analyzed analogously to the method described below for the test datasets. The reference length of the bar (R) was measured to be 50.4452 mm.
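As a quick plausibility check (my arithmetic, not a figure from the paper), plugging the measured bar length into the quoted error formula shows how far below scanner-level errors the CMM reference sits:

```python
# Maximum permissible error of the CMM at the reference bar length,
# per the formula quoted above (k and multiplier as stated in the text).
k, multiplier = 1.9, 3.0   # microns; microns per meter of travel
L = 50.4452                # bar length in mm
mpe_microns = k + multiplier * L / 1000
print(f"MPEe = {mpe_microns:.3f} microns")  # about 2.05 microns
```

At roughly 2 μm, the reference measurement is about two orders of magnitude more accurate than the 100-200 μm deviations later reported for the scanners, which is what justifies treating the CMM dataset as ground truth.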
Scanning of the testing models

The four polyurethane mandible models including the reference bar were digitized with two intraoral scanners (n = 30/group). Both IOS were calibrated prior to each scanning session and after every five scans. All scans were performed according to the manufacturer's specifications, by the same experienced operator, in the same location, under ambient room lighting conditions, using the extraoral data acquisition mode. The same scanning strategy was employed for every scan, starting at tooth 48 and moving along the occlusal surface to tooth 38, then proceeding along the lingual surface back to tooth 48. Scanning concluded with the capture of the vestibular side of the dentition from the fourth to the third quadrant. During each scan, it was ensured that the opposing ends of the metal bar were not connected in the generated virtual dataset. The resulting STL datasets were post-processed and directly exported from the devices.

Analysis of datasets

All datasets of both analysis groups were trimmed and equally oriented into the virtual coordinate system of the Geomagic Control software, where the XY-, XZ-, and YZ-planes represent the coronal, transverse, and sagittal dimensions, respectively (Fig. 3). Afterwards, the relevant reference features were generated using the software function "Contact Feature": vectors $\vec{V}_3$ and $\vec{V}_4$ were created at the intersection of the anterior and posterior planes in the third and fourth quadrant, respectively (Fig. 6). Furthermore, the points P3 and P4 were defined as the crossing points of $\vec{V}_3$ with B3 and $\vec{V}_4$ with B4. In addition, the plane B4 was parallel-shifted by 50.4452 mm in the direction of the third quadrant, creating B3′, and the resulting meeting point of B3′ with the vector $\vec{V}_3$ was named point P3′ (Fig. 7). The point coordinates of P3, P4, and P3′ and the vector coordinates of $\vec{V}_3$ and $\vec{V}_4$ were imported into Microsoft Excel (Version 1902, Microsoft Corporation, Redmond, US).

For the evaluation of the linear shift in the x-, y-, and z-axes, the vectoral error ($\vec{V}_E$) was calculated between point P3′ and P3 using the following formula (x, y, and z are the coordinates of the points in the x-, y-, and z-axes):

$\vec{V}_E = \left(x_{P3'} - x_{P3},\ y_{P3'} - y_{P3},\ z_{P3'} - z_{P3}\right), \quad |\vec{V}_E| = \sqrt{(x_{P3'} - x_{P3})^2 + (y_{P3'} - y_{P3})^2 + (z_{P3'} - z_{P3})^2}$

To assess the degree of the spatial distortion between the two halves of the bar, the overall angle between the vectors $\vec{V}_3$ and $\vec{V}_4$ was first calculated using the following formula (X, Y, and Z are the vector components in the x-, y-, and z-axes):

$\alpha_{overall} = \arccos\left(\frac{X_3 X_4 + Y_3 Y_4 + Z_3 Z_4}{\sqrt{X_3^2 + Y_3^2 + Z_3^2}\,\sqrt{X_4^2 + Y_4^2 + Z_4^2}}\right)$

Moreover, the projection of $\alpha_{overall}$ onto the XY-plane ($\alpha_{coronal}$) and the XZ-plane ($\alpha_{horizontal}$) provides further insight into the spatial distortion between the two halves of the bar in the coronal and horizontal planes. The projections were calculated using the following formulas:

$\alpha_{coronal} = \arccos\left(\frac{X_3 X_4 + Y_3 Y_4}{\sqrt{X_3^2 + Y_3^2}\,\sqrt{X_4^2 + Y_4^2}}\right), \quad \alpha_{horizontal} = \arccos\left(\frac{X_3 X_4 + Z_3 Z_4}{\sqrt{X_3^2 + Z_3^2}\,\sqrt{X_4^2 + Z_4^2}}\right)$

Statistical analysis

For the statistical analysis, SPSS Version 25 (SPSS Inc., Chicago, USA) was used. Kolmogorov-Smirnov and Shapiro-Wilk tests were applied to assess normality, followed by the Kruskal-Wallis H test. Trueness was evaluated using a post hoc Mann-Whitney U test, and precision was assessed with a two-sample Kolmogorov-Smirnov test. A Bonferroni correction was applied. The level of significance was set at p = 0.008 for the model situation and at p = 0.05 for the IOS. A post hoc power analysis by two-tailed Wilcoxon-Mann-Whitney test was conducted using the G*Power software package (version 3.1.9.7). A sample size of 30 and an alpha level of 0.05 were applied.
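The geometric evaluation described above can be made concrete with a short sketch. Only the formulas are taken from the description; the NumPy implementation and all coordinates below are invented for illustration.

```python
# Sketch of the error metrics above: per-axis linear error between P3' and
# P3, and the angle between the bar-axis vectors V3 and V4 with its coronal
# (XY) and horizontal (XZ) projections. All numbers are hypothetical.
import numpy as np

def linear_error(p3_shifted: np.ndarray, p3: np.ndarray) -> np.ndarray:
    """Per-axis components of the vectoral error V_E = P3' - P3."""
    return p3_shifted - p3

def angle_deg(v_a: np.ndarray, v_b: np.ndarray) -> float:
    """Angle between two vectors, in degrees (clipped for numerical safety)."""
    cos_a = np.dot(v_a, v_b) / (np.linalg.norm(v_a) * np.linalg.norm(v_b))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical data in mm: a scan that displaced P3' slightly from P3.
p3 = np.array([-25.2226, 0.0, 0.0])
p3_shifted = np.array([-25.2101, -0.0350, 0.0630])
v3 = np.array([1.0, 0.002, 0.001])
v4 = np.array([1.0, 0.001, -0.002])

err = linear_error(p3_shifted, p3)
print("V_E components (x, y, z) in mm:", err)
print("|V_E| =", np.linalg.norm(err), "mm")
print("alpha_overall    =", angle_deg(v3, v4), "deg")
# Projections: drop the axis orthogonal to the plane of interest.
print("alpha_coronal    =", angle_deg(v3[[0, 1]], v4[[0, 1]]), "deg")  # XY
print("alpha_horizontal =", angle_deg(v3[[0, 2]], v4[[0, 2]]), "deg")  # XZ
```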
Results

Descriptive statistics, including median values, minima, maxima, and the 95% confidence interval for each parameter, are given in Table 1. The Kolmogorov-Smirnov test revealed 14 of the 56 parameters to be not normally distributed. Figures 8 and 9 show the boxplots of all tested parameters. The post hoc power analysis detected a power of 79 to 100% for the comparison between model situations. Regarding the comparison between intraoral scanners, a power of 84 to 100% was revealed for the linear parameters that demonstrated significant differences. For angular comparisons, a power of 96 to 100% was shown.

Discussion

The present in vitro study compares the trueness of two different IOS systems, namely Cerec Primescan AC (PRI) and Cerec AC Omnicam (OMN), on four different partially edentulous situations. For that purpose, linear deviations in the x-, y-, and z-axes as well as angle measurements in the coronal and horizontal directions were examined. The investigated IOS hardware and software components used in this study are currently available on the market. In the present study, PRI presented higher trueness and precision than OMN in most of the measured parameters for every tested model. Accordingly, the first null hypothesis, predicting no significant differences between the two IOS devices, must be rejected. Regardless of model situation, larger discrepancies were revealed for OMN in the vertical dimension and horizontally across the arch, while PRI produced higher deviations horizontally in the anterior-posterior direction along the y-axis. Since parameters like software version, ambient light conditions, scanning strategy, calibration, and operator were kept constant throughout the scanning procedure, the dissimilar performance of the IOS systems regarding trueness, precision, and distortion pattern can be attributed to the different hardware components or measuring principles. A significant design variation between the two devices consists in the size of the scanning head. The larger scanning unit with a bigger field of view facilitates the capture of larger parts of the arch at once, generating single images of a greater area, which can be more precisely overlapped by the software algorithm, thus alleviating inaccuracies generated during the stitching process. A prior study reported improved accuracy on partially edentulous arches when a larger scanning head was used, while Kim et al. attributed the inability of OMN to digitize a partially edentulous arch to the smaller scanning head of the device [25,26]. On the other hand, a bulky handpiece might limit maneuverability in the oral cavity. Additionally, scanning of the steep interproximal surfaces of the teeth may produce distorted images and therefore generate a greater error due to improper image overlap. The occurrence of higher discrepancies in the unilateral edentulous situation with PRI could be accounted for by the substantial size of the scanning head, which hindered access to the gap between teeth 47 and 43. Secondly, the two systems utilize different optical capturing technologies, which are regarded as crucial in determining the depth of image stitching and could therefore account for the difference in the vertical dimension displayed by the two IOS [8,9,27,28]. OMN relies on active triangulation with a strip light projection, where the accuracy error is determined by the angle of light reflection and is dependent on the distance between camera and object [28][29][30].
Consequently, abrupt changes of scanning depth, for instance in edentulous areas, may negatively influence the accuracy of digitization and account for the larger discrepancies in the vertical dimension demonstrated by OMN. Moreover, the lower trueness and precision in model situation B (45, 44, 34, 35 missing) illustrates the cumulative distortion caused by edentulous areas on each side of the arch. PRI works on the basis of short-wave light with optical high-frequency contrast analysis for dynamic depth scanning and high-resolution sensors, and seems to be less affected by changes in the focus level. However, due to the different working principles, the ambient scanning light conditions might have a different influence on the accuracy results of the IOS. For the Cerec AC Omnicam, the ambient light intensity significantly influenced the trueness and precision of the virtual model dataset after direct digitalization [31]. For the Cerec Primescan AC, however, no information was available in the literature, so this should be a topic of further investigation. The second null hypothesis, stating that no differences in accuracy should arise between the different model situations, also must be rejected, as the fully dentate model situation exhibited significantly higher trueness and precision in most tested parameters regardless of IOS. Figure 10 depicts the different patterns produced by each scanner for every model. With PRI, the anterior edentulous situation displayed significantly lower trueness in $\vec{V}_E(z)$ and $\alpha_{coronal}$ than the fully dentate situation, signifying that the lack of anterior teeth produces higher vertical deformations. In addition, significant differences between the fully dentate and the bilateral partially edentulous model in $\vec{V}_E$, $\alpha_{horizontal}$, and $\vec{V}_E(y)$ indicate that edentulous areas in both halves of the dental arch increase model warpage horizontally in the anterior-posterior direction along the y-axis. Model situation A (missing 46-44) differed significantly in almost every parameter from the fully dentate situation, demonstrating that a larger edentulous area located closer to the origin of the scan path tends to generate overall higher error and lower predictability. By contrast, digitization with OMN seems to produce a different pattern of distortion. Model situation C (missing 42-32) performed similarly to the fully dentate situation. Several authors maintain that the steep and narrow surfaces of the anterior teeth amplify the error generated by image overlap, thus increasing the overall inaccuracy of the dataset [7,[32][33][34]. Considering the current results, however, this theorized effect of the anterior teeth appears to have been overestimated. Moreover, model situation A (missing teeth 46-44) showed significant divergence from the fully dentate model in $\vec{V}_E(x)$ and $\alpha_{horizontal}$, possibly because of transversal expansion along the x-axis. The bilateral edentulous situation exhibited significantly higher aberrations than the fully dentate model and the lowest predictability in every direction, signifying that edentulous spaces in both hemiarches result in lower overall accuracy. The results were reported for trueness and precision according to ISO 5725 [35]. For better comparison of the results with the current literature, an established methodology was used [4,12]. Also, the use of reference data acquired by highly accurate industrial digitalization units is the gold standard for reporting trueness.
In most of the scientific literature, datasets of digital models are quantitatively analyzed after superimposition with a reference dataset employing a best-fit algorithm, while deviation patterns are usually calculated on the basis of 3-dimensional differences [33,34,36,37]. The calculated results are often given as positive and negative deviations from the reference dataset and graphically depicted in color-coded maps. However, this approach has been subject to criticism, primarily due to errors inherent in the data processing of full-arch digital models [4,5,18,27]. Moreover, as best-fit alignment attempts to minimize the overall differences between datasets, real errors may inevitably be obscured [3,6,7]. Therefore, a qualitative comparison of the results with the available literature is challenging. Linear deviations produced by PRI in the x-, y-, and z-axes are within the range of values reported in a previous in vivo examination [18]. By contrast, a study by Kim et al. revealed in vitro higher linear discrepancies for OMN in the y- and z-axes, albeit on a partially edentulous mandible with scan bodies in the area of the missing teeth [10]. A recent in vitro study found significantly higher trueness for PRI than for OMN in the x- and y-axes, yet no significant differences could be found in the z-axis [9]. Kuhr et al. reported higher vertical angular deviations for OMN on fully dentate mandibles in vivo [7]. Furthermore, PRI has been repeatedly shown to perform better than OMN, although it should be noted that the aforementioned comparisons were conducted with dataset superimposition after best-fit alignment [32,34,37]. Regarding warpage of virtual models, previous investigations based on color-coded maps have reported anterior contraction and posterior expansion of datasets generated with OMN [32,36]. The findings of the current research suggest that the presence of edentulous areas in the dental arch, especially in place of posterior teeth, negatively affects the trueness of the generated digital model. Due to the absence of distinctive anatomical structures and the presence of non-attached mucosa or saliva, the digitization of edentulous spaces is rendered particularly challenging [19,26,38]. The inaccuracies and distortions in the digital model affect the subsequent computer-assisted design and manufacturing procedure and eventually influence the accuracy of the prosthetic restoration. Discrepancies in the transverse and horizontal planes (x- and y-axes) could translate into a misfit of the final restoration, while vertical divergence (z-axis) illustrates the torsion between the two hemiarches and might eventually result in occlusal incongruity. Prior investigations analyzed the influence of varying edentulous anatomies on the accuracy of the resulting virtual model dataset by best-fit superimposition [25,39,40] and corroborate the current results. However, to the author's knowledge, no previous in vitro study has investigated the effect of varying partially edentulous anatomies on the accuracy of full-arch digital impressions using a reproducible reference structure for all investigated model situations. Though superimposition of datasets provides no quantifiable information on the generated pattern of distortion, it allows a more detailed surface analysis. In general, with the current setup, only the accuracy of the initial step of the workflow, namely the digital impression, can be investigated; hence, errors bearing on the manufacturing process cannot be assessed.
To analyze the accuracy of the complete workflow, including the manufacturing of the dental restorations, the final fit of the dental restoration should be investigated [41]. An advantage of the current analysis methodology is its applicability to comparisons between different digital impression systems [4,5] and model morphologies - even dysgnathic situations [42] - and potentially in an in vivo setting [4]. However, like every scientific work, the present work is subject to several limitations. Scanning performance was evaluated only for the lower jaw, as digitization of the upper jaw has been theorized to result in higher accuracy due to the additional image overlap over the palatal area [7]. Furthermore, the present study was conducted in a laboratory setting, where the effects of multiple factors such as patient movement, patient compliance, spatial restrictions, and the presence of saliva or blood, which may influence the results of an in vivo experiment, cannot be reproduced [33]. However, a recent investigation by Keul et al. revealed a comparable pattern of distortion for in vivo and in vitro scans [4]. In addition, the polyurethane models exhibit different optical properties from intraoral structures (enamel, dentin, mucosa); therefore, IOS may perform differently when scanning intraorally [43]. Furthermore, only one experienced operator was included, and the data acquisition was not performed in a clinic-simulating situation using a phantom head; therefore, the data capturing mode of both IOS systems was switched to extraoral digitalization. Lastly, future scientific research is necessary to address the effect of different scanning strategies on partially edentulous situations as well as to analyze the accuracy of IOS on a greater variety of partially edentulous jaws. The results of the current investigation suggest that Cerec Primescan AC and Cerec AC Omnicam are applicable in digital prosthetic planning, complex implant planning for fixed prosthodontics on edentulous jaws, and even digital planning of complex orthognathic procedures [42,44]. Moreover, both systems provide sufficient accuracy for the manufacturing of tooth-borne restorations and orthodontic appliances, as the measured error falls within the range of tooth mobility [45]. However, for full-arch fixed implant restorations, where passive fit is required [46], the use of intraoral scanning systems should be considered with reservation, even though the use of IOS for the manufacture of fixed implant prostheses based on the "All-on-4" concept, with implants placed in the area between the second premolars, has been documented [44].

Conclusions

Within the limitations of the present in vitro study, the following conclusions can be drawn: (1) PRI demonstrated higher accuracy in the X- and Z-axes, while OMN showed higher trueness in the Y-axis; (2) for PRI, Model A revealed the highest distortion, while for OMN, Model B produced the largest aberrations in most parameters; (3) both the hardware specifications of the IOS and the presence of edentulous gaps in the dental model influence the accuracy of the virtual model dataset, although both systems appear sufficiently accurate for tooth-borne restorations and orthodontic appliances.
Technologies for Asphalt Pavement Surface Testing in Road and Bridge Construction

Asphalt pavement is currently one of the main components in the construction of roads and bridges. In practice, however, various quality problems are prone to occur in the surface layer of asphalt pavement, which leads to poor overall quality of road and bridge projects. Therefore, advanced testing technologies should be applied reasonably to test the mixture quality, compaction, segregation, thickness, and other aspects of the asphalt pavement surface layer, so as to improve the quality of the asphalt pavement surface layer and, in turn, the overall quality of road and bridge construction. Accordingly, this paper analyzes the technologies for asphalt pavement surface layer testing in road and bridge engineering construction.

Introduction

In recent years, China has undergone continuous urbanization, the transportation network has been improving, and the construction of roads and bridges has received more and more attention. Asphalt pavement is one of the most frequently used road surfaces in contemporary China. The durability of asphalt pavement is not only the result of the rapid improvement of road and bridge construction in China, but also a reflection of the technical level and working attitude of construction workers. Therefore, in the process of asphalt pavement construction, it is important to carry out a detailed inspection of the pavement base to ensure its good quality [1]. Moreover, in the actual construction process, there may be situations such as uneven distribution of surface materials, insufficient tightness of pavement joints, and unqualified raw material mixtures, which may affect the safety of road operation. Therefore, it is necessary to carry out reasonable quality checking of the asphalt pavement surface layer. Hence, it is of great significance to analyze and study the technologies for asphalt pavement surface layer testing in road and bridge construction.

Construction problems of the asphalt pavement surface

During the construction of road and bridge projects, the construction problems that may occur in the asphalt pavement surface are cracks, upper layer problems, and raw material quality problems. These occur quite frequently, and any one of them will affect the overall quality of the road and bridge. Therefore, it is necessary to conduct an in-depth exploration of problem detection to ensure that problems can be found quickly in the quality checking process.

(1) Crack detection and repair

Cracks are a common problem in the construction of asphalt pavement on roads and bridges. Cracks can easily affect the overall strength of the pavement and are not conducive to prolonging the service life of the road. Therefore, it is very important to detect and repair cracks in the asphalt pavement surface effectively and in time. Cracks in the pavement surface can also extend with the expansion and contraction of the pavement. Although the possibility of debris falling is small, the waterproof performance of road bridges will inevitably be affected, and the safety of road bridges will then be compromised, so it is necessary to efficiently detect and repair cracks in the asphalt pavement surface of road bridges [2].

(2) Raw material quality problems

Quality problems in the raw material will significantly affect asphalt pavement surfacing.
Generally speaking, raw material quality problems mean inconsistent raw material quality. For example, the content of dust particles in aggregates may exceed the standard, or the size of the materials may be too large, either of which can affect the overall quality of the raw material mixture. Moreover, because aggregates come from different sources, their gradation may be inconsistent, which will affect the construction effect of the asphalt pavement surface.

Asphalt pavement surface detection technology in road and bridge construction

A comprehensive inspection of the asphalt pavement surface layer is an important basis for improving the overall quality and safety, and for prolonging the service life, of road and bridge projects. Therefore, it is very important to analyze the related detection technologies.

(1) Mixture analysis

Mixture analysis mainly involves testing the strength and toughness of the asphalt pavement surface layer mixture. It is therefore necessary to reasonably control the proportion of materials in the asphalt mixture while ensuring that the construction materials are most consistent with the construction requirements of the pavement surface layer. When mixing the asphalt mixture, technicians should pay attention to precise control of the mixing time and temperature. If the mixing time is too long or too short, or the temperature is too high, asphalt aging or uneven mixing may occur. Therefore, it is necessary to properly control the quality of raw materials, the proportioning, the mixing time, and the mixing temperature of the asphalt mixture. Afterwards, a rutting test should be carried out. Generally, the test temperature is set at 60°C, a suitable load is selected to run over the test specimen repeatedly, and the deformation of the test piece is recorded; the dynamic stability of the asphalt mixture is then determined based on the number of wheel passes. In addition, an appropriate amount of asphalt mixture can also be placed in water for freezing; through the erosion of water, the asphalt mixture's anti-loosening, anti-dropping, anti-stripping and other anti-destructive capabilities can be determined, so as to understand its water stability [3].

(2) Compaction analysis

When performing asphalt pavement surface construction work, the asphalt material is not simply laid on the pavement; it needs to be rolled several times, and there are many precautions in the rolling process. The degree of compaction of the asphalt pavement surface layer is closely related to the smoothness of the pavement. A smooth road surface ensures good anti-skid performance and load capacity, which is conducive to improving the safety of the asphalt pavement. Over-compaction will cause the asphalt mixture to become too dense, and bleeding will easily occur in a high-temperature environment, which will decrease the static friction coefficient of the road surface and may cause accidents such as skidding, affecting driving safety. Therefore, the degree of compaction of the asphalt pavement surface must be controlled within a reasonable range, and it is of great significance to measure the degree of compaction of the asphalt pavement surface. In the past, the main method for testing the compactness of asphalt pavement was the Marshall compactness test, but with the development of modern technology, more convenient methods have emerged. For example, in the wax seal method, the sample is first molded based on the Marshall method, then the weight of the specimen in air is measured.
Next, the open pores of the specimen are filled by immersing it in melted wax and rolling it repeatedly to seal all the pores until the wax condenses. The wax-sealed test piece is pressed into the mold, the excess wax is scraped off with the edge of an abrasive tool, and a scraper is used to trim the two sides of the test piece so that its volume is the same as that of a test piece that has not been sealed with wax. After that, the weights of the wax-sealed specimen in air and in water are measured, and the compactness of the asphalt pavement is calculated [4].

(3) Segregation analysis

Segregation of the asphalt pavement surface layer is mainly caused by the uneven distribution of the mixture, and it creates safety hazards, especially in cases of hot weather, bad weather, overloading, and so on, which severely shorten the service life of the pavement and further aggravate the hidden dangers. The degree of segregation of the asphalt pavement surface layer can be judged by eye, but this method is only suitable for large particles and coarse mixtures, which makes it highly subjective and limited, and it cannot be quantified; it can therefore easily cause disputes between parties [5]. The degree of segregation of the surface layer can also be determined by the sand-spreading method: after the sand-spreading operation, the surface texture depth of the area where segregation occurs and the area where it does not occur show significant differences, but this method is time-consuming and laborious, making it less popular. The coring method is a traditional form of destructive testing: core samples are drilled in suspected segregation areas, and the gradation composition, asphalt content, density, and void ratio of the core samples are measured and compared with the standard values, from which the degree of segregation of the surface layer is determined [6]. More advanced methods include infrared cameras, nuclear density meters, and ground-penetrating radar detection. Detecting segregation through temperature differences in the layers is conducive to early detection and intervention, so as to ensure the construction quality of the asphalt pavement surface layer; this method is a segregation detection technology with high application frequency and good application results [7].

(4) Thickness testing

The thickness of the asphalt pavement surface layer is crucial because it determines the overall compressive capacity of the pavement. Under normal circumstances, pavement radar detection technology can be used. Inspectors first use ground-penetrating radar to emit electromagnetic pulses toward the pavement surface layer. The pulses quickly pass through the pavement layers, and the data acquisition system records the return time of each pulse and the sudden changes of the dielectric constant at discontinuities in the pavement structure. Because the material of each structural layer in the pavement has its own dielectric constant, a position where the dielectric constant changes suddenly marks the interface between different structural layers [8]. Therefore, the pavement structural layer thickness can be calculated from the measured dielectric constants of the different pavement materials and the travel time of the reflected pulses.
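To make the two calculations above concrete, here is a small sketch. The relations used (the wax-corrected bulk density and the standard GPR thickness formula h = c·t / (2·√εr) for a low-loss layer) are textbook formulas rather than ones spelled out in this paper, and all numbers are invented for illustration.

```python
# Illustrative sketch of the wax-seal compaction and GPR thickness
# calculations described above; values are hypothetical.
C = 0.299792458  # speed of light in vacuum, m/ns

def wax_seal_density(m_air_g, m_waxed_air_g, m_waxed_water_g,
                     wax_density=0.93, water_density=1.0):
    """Bulk density (g/cm^3): total volume from water displacement,
    minus the volume of the wax coating itself."""
    total_volume = (m_waxed_air_g - m_waxed_water_g) / water_density
    wax_volume = (m_waxed_air_g - m_air_g) / wax_density
    return m_air_g / (total_volume - wax_volume)

def degree_of_compaction(field_density, reference_density):
    """Compaction (%) = field bulk density / laboratory reference density."""
    return 100.0 * field_density / reference_density

def gpr_layer_thickness_cm(two_way_time_ns, rel_permittivity):
    """Layer thickness from GPR: h = c * t / (2 * sqrt(eps_r))."""
    velocity = C / rel_permittivity ** 0.5           # m/ns within the layer
    return 100.0 * velocity * two_way_time_ns / 2.0  # meters -> centimeters

rho = wax_seal_density(m_air_g=1205.0, m_waxed_air_g=1230.0,
                       m_waxed_water_g=680.0)
print(f"bulk density : {rho:.3f} g/cm^3")                    # ~2.30 g/cm^3
print(f"compaction   : {degree_of_compaction(rho, 2.35):.1f} %")
print(f"thickness    : {gpr_layer_thickness_cm(1.2, 6.0):.1f} cm")  # ~7.3 cm
```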
It should be noted that the detection speed should be kept below 75 km/h, the continuous detection range should be within 20 km, the detection depth should exceed 60 cm, and the detection process should be controlled by a computer to ensure that data collection, storage, and radar waveform display can be carried out simultaneously. After the data are processed, the three-dimensional pavement thickness profile, a color plan, and a thickness table of the pavement are displayed on a computer.

(5) Flatness testing

The flatness of the asphalt pavement surface has a direct and important impact on the safety and comfort of driving. Unevenness will cause more significant deformation and can even lead to road collapse, which is a great safety hazard. Therefore, it is very important to check the flatness of the asphalt pavement surface. The laser flatness meter and the vehicular bump-integrator are currently the instruments most frequently applied in flatness detection [9]. The laser flatness meter combines a laser sensor and a distance sensor. It can carry out long-distance, rapid, automatic detection of the road surface under normal vehicle speed conditions, and a computer can be paired with the device for on-site data analysis and evaluation. It offers high detection speed and accuracy; its detection speed can reach up to 80 km/h, so it can conduct comprehensive inspections of urban roads, airport runway surfaces, and expressways. At the same time, because it is completely automated, the accuracy of the detection results can be guaranteed. The vehicular bump-integrator can quickly detect the smoothness of the road surface, and it is easy to operate and cheap. Because its sensor is mounted on the vehicle, the driving speed and vibration characteristics of the vehicle can affect the detection results to a certain extent [10].

Conclusion

At present, the socioeconomic status of China is rapidly improving, and road and bridge projects are developing rapidly. With the testing technologies described above, the quality of the mixture, segregation, compaction, thickness, and flatness can be effectively guaranteed, which can significantly improve the overall quality of the asphalt pavement surface layer, as well as its later service performance and safety. Therefore, it is clear that advanced asphalt pavement testing technologies greatly contribute to the development of the country's roads and bridges.

Disclosure statement

The author declares no conflict of interest.
Low Dose Mesalazine Plus Bismuth Regimen and Symptoms of Irritable Bowel Syndrome in Patients with Bloating: A Quasi-Experimental Study

Background: The current study aimed to evaluate the efficacy of mesalazine plus bismuth in patients with irritable bowel syndrome (IBS) and a chief complaint of bloating. Methods: The current quasi-experimental study included patients with IBS and chief complaints of bloating and incomplete defecation. They were treated with mesalazine and bismuth subcitrate and followed regularly at monthly visits. The rate of symptom relief, patients' satisfaction, and any side effects were recorded during the surveillance. Results: Overall, 42 patients (33 females and 9 males) were included. The mean age of the patients was 35.9 years (range 22-67 years); 32%, 44% and 24% had high, medium and low socioeconomic levels, respectively; 96% of the patients were nonsmokers and just two patients had a history of alcohol consumption. Two patients had glucose intolerance, four had hypothyroidism and four had a past history of valvular heart disease. In 20% of the patients, the family history of inflammatory bowel disease (IBD) was positive. Ten patients had a history of bloody diarrhea and none had a history of any significant liver disease. The most common symptoms included incomplete defecation and tenesmus (41 patients, 97.6%), bloating (39 patients, 92.8%), abdominal fullness (35 patients, 83.3%) and mucus discharge (30 patients, 71.4%). After an average of six months of treatment (3-11 months), 69.1% of patients reported improvement of symptoms of more than 50% (38.1% reported 75%-100% improvement and 31% reported 50%-75% improvement). The most significant improvement was reported for bloating (85%). There were no major side effects except minor degrees of diarrhea among 26% of the subjects. Conclusions: The results of the study indicate improvement and symptom relief in the majority of patients, and it seems that prolonging treatment up to six months could be a key factor in achieving better clinical responses. Further randomized clinical trials are recommended to evaluate this therapeutic regimen.

Background

Irritable bowel syndrome (IBS) is a functional disorder of the intestine with major symptoms of chronic abdominal pain and change in bowel habit (1). IBS is one of the most common gastrointestinal (GI) disorders, with an estimated prevalence of about 20% in developed countries (1,2). There are no specific laboratory or imaging criteria for its diagnosis, and IBS is diagnosed mainly based on the Rome criteria and exclusion of other organic disorders (2,3). IBS can be divided into subgroups based on dominant symptoms (2)(3)(4), and bloating is one of its major symptoms among the different subgroups (5). There are many theories about the possible pathophysiology of this condition, including GI motility disorders (6), central nervous system dysfunction (7), abnormal psychological conditions (8), post-infection IBS (9), disturbance of the bowel microbiota (10), serotonin pathway disturbance, immune system dysfunction and/or mucosal inflammation (11)(12)(13)(14)(15). Recent studies have emphasized the importance of mucosal inflammation and activation of the immune system. Based on these theories, symptoms of IBS result from the interaction between environmental factors and factors related to a genetically susceptible host (10).
Induction of mucosal inflammation (caused by infections or any other unknown reason) results in increased permeability of the mucosal barrier along the small bowel and colon, which subsequently activates secretory reflexes and stimulates sensory roots in the intestine. This group of patients presents dominant symptoms of diarrhea (stimulation of secretory reflexes), abdominal pain (triggering of sensory roots) and bloating; induction of irritable bowel syndrome after infections, presence of mast cells and lymphocytes in intestinal mucosa, elevated levels of inflammatory cytokines and clinical response to non-absorbable antibiotics are all signs of mucosal inflammation dominancy among these patients. These findings introduced new horizons in discovering novel and more specific therapeutic approaches to treat such patients (16). Based on the obtained data, some studies investigated the effects of mesalazine, a well-known drug to treat inflammatory bowel disease that markedly reduces mucosal immune cells, especially mast cells, and significantly improves general well-being, and bismuth subcitrate, with anti-diarrheal, anti-inflammatory and antibiotic properties, in the treatment of such patients (10, 14-17).

Objectives

The current study aimed to evaluate the efficacy of mesalazine and bismuth subcitrate on patients with symptoms of bloating and diarrhea-dominant IBS unresponsive to routine therapeutic approaches.

Methods

The study was performed from summer 2013 up to spring 2014 on 40 consecutive patients with IBS referred to the outpatient GI clinic of Ahvaz Imam hospital, Iran, as a referral center. Inclusion criteria included confirmation of IBS diagnosis by two gastroenterologists based on Rome III criteria, presence of bloating as the dominant complaint of the patient and normal total colonoscopy with random biopsies. Exclusion criteria included any history of celiac disease, absence of improvement by routine therapeutic approaches for at least one year, pregnancy, breast feeding, history of abdominal surgery, opiate abuse, non-steroidal anti-inflammatory drug (NSAID) usage, long-term antibiotic use, renal failure, severe chronic liver disease, sensitivity to salicylates and history of any morbid disorders. The socio-demographic level of participants was determined based on their monthly income and level of education. Before inclusion in the study, all of the participants were requested to sign an informed consent and also to report to their clinical manager any problems or side effects during the study period. They were also free to leave the study at any time based on their own desire. The study protocol was approved by the Ahvaz Jundishapur ethics committee (ajums.REC.1393.39) and it was also registered in the Iranian registry of clinical trials (IRCT2014080814190N4). All of the study subjects were treated with mesalazine (2 g/day) and bismuth subcitrate (120 mg BID) for six months. All of the patients were evaluated for symptoms such as severity and duration of abdominal pain, number of bowel movements per day and improvement of bloating by filling out a questionnaire at the beginning, at the third month and at the end of the study. They were given their medication based on the monthly number of tablets for each month and were also observed not to miss any dosing of drugs. After collection, the data were analyzed by SPSS software ver. 15. Descriptive statistics were used to measure averages and standard deviations, and normality of variables was determined by the Kolmogorov-Smirnov test, as sketched below. A P value < 0.05 was considered statistically significant.
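For readers who want to reproduce this style of analysis outside SPSS, the following is a minimal sketch of the two steps described above (descriptive statistics plus a Kolmogorov-Smirnov normality check). The data and variable name are invented stand-ins, not the study's dataset; fitting the normal's parameters from the sample, as done here, is only an approximation of a formal normality test.

```python
# A minimal sketch of the analysis described above, using Python in place of
# SPSS ver. 15. The "improvement" scores below are hypothetical, for
# illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-patient symptom-improvement scores (%), n = 42.
improvement = rng.normal(loc=60, scale=20, size=42)

mean, sd = improvement.mean(), improvement.std(ddof=1)
print(f"mean = {mean:.1f}%, sd = {sd:.1f}%")

# K-S test against a normal distribution with the sample's own mean and sd;
# p >= 0.05 is taken as consistent with normality.
ks_stat, p_value = stats.kstest(improvement, "norm", args=(mean, sd))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```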
Results

Overall, the study included 42 patients and 33 of them (78.6%) were female. The mean age of patients was 35.9 years (range 22 - 67 years) and the mean weight was 66.3 kg (range 42 - 100); 40% (2 males, 18 females) were single and, from the viewpoint of socioeconomic level, 32%, 44% and 24% had high, medium and poor monthly income, respectively. The mean number of patients' family members in their living place was 5 (range 2 - 9) persons; 96% of the subjects were nonsmokers and just two had a history of occasional alcohol consumption (Table 1). A couple of patients had glucose intolerance (FBS > 100), four had hypothyroidism (under treatment and euthyroid during the study period) and four had a past history of valvular heart disease such as mitral valve prolapse (MVP). In 20% of the patients, the family history of IBD was positive. Ten patients had a history of bloody diarrhea and no one had a history of any significant liver disease (except mild fatty liver disease in two patients). Other, less frequent complaints were also recorded (one in 21.4% of the patients) along with fever (3 patients, 7%). After an average of six months of treatment with mesalazine one gram plus bismuth subcitrate 120 mg BID (3 - 11 months), 69.1% of patients reported improvement in symptoms of more than 50% (16 patients, 38.1%, in the 75% - 100% range, and 13 patients, 31%, in the 50% - 75% range, indicating overall symptom relief). The most significant improvement was reported for bloating (85%). There were no major side effects and only 26% of patients complained of minor degrees of diarrhea. The rate of symptom improvement had no significant difference between males and females (P = 0.526). Moreover, the rate of > 50% improvement was unrelated to complaint or history of diarrhea (P = 0.33) and/or constipation (P = 0.526). The best therapeutic results were observed with a treatment period of ≥ 6 months (Figure 1).

Discussion

Irritable bowel syndrome (IBS) is a common disease in most communities. In such settings, the patients complain not only of abdominal pain but also of other gastrointestinal symptoms including bloating and eructation, even though their presence is not mandatory for diagnosis (1)(2)(3)(4). The current study evaluated the therapeutic efficacy of a mesalazine and bismuth subcitrate regimen to treat patients with IBS and dominant complaints of bloating and incomplete defecation, given the obvious effects of bloating on declining quality of life in such patients and recent findings about the role of mucosal inflammation and immune system activation in the pathogenesis of this condition (10)(11)(12)(13)(14)(15). The definite pathogenesis of this condition has not been elucidated completely, but disturbance of gut microbiota has been considered in recent studies (12). Moreover, some studies showed that a short course of antibiotic therapy resulted in significant improvement in GI symptoms of such patients (6). In a double blind clinical trial, neomycin resulted in 25% improvement in the results of the lactulose breath test in comparison with placebo, and this drug was more effective in relieving clinical symptoms, although its usage was abandoned due to systemic absorption and side effects (18)(19)(20). Accordingly, in another study rifaximin, a non-absorbable antibiotic, was more efficacious than placebo in treating bloating, although only 60% of participants in that study fulfilled the Rome II criteria for diagnosing IBS and none underwent the lactulose breath test (13).
In the current study, prescribing bismuth subcitrate 120 mg plus mesalazine one gram BID resulted in significant improvement among bloating-dominant IBS patients, and this improvement remained up to two weeks after discontinuation of the drug. Some studies reported that this efficacy was more permanent; therefore, it seems that these effects could not be related solely to the therapeutic efficacy of bismuth subcitrate. One study evaluated the efficacy of a three-week course of treatment with bismuth subcitrate and achieved significant improvement in abdominal pain, changing bowel habit and diarrhea, as well as positive histologic changes in intestinal biopsy (15). On the other hand, some studies considered mesalazine efficacious in decreasing mucosal lymphocytes and mast cells and improving general well-being among patients with IBS, even though not effective in relieving abdominal pain and bloating (14,16,17). Based on the study findings, it was concluded that a combination of bismuth subcitrate as a non-absorbable antibiotic with mesalazine as an anti-inflammatory drug can be useful to treat bloating-dominant patients and, as mentioned previously, no study has evaluated this combination formula. Although the study was performed as a case series, since almost all of the subjects were chronically affected and had been treated by multiple different therapeutic regimens (including placebo and reassurance), the authors could compare the current results of the participants against the participants themselves as a control group; and perhaps in the future this group can be categorized as a distinct group located at the boundary between IBS and IBD as a spectrum (21). It seems that the patients with chief complaints of bloating and incomplete defecation who did not match completely the criteria of IBS or IBD could be termed irritable colitis, and they respond very well to a limited course of mesalazine plus bismuth.

Conclusion

The results of the study were indicative of improvement and symptom relief in the majority of patients, and it seems that treatment prolongation up to six months could be a key factor to achieve better clinical responses. It is recommended to evaluate this therapeutic regimen in further randomized clinical trials.
Message Design Choices Don't Make Much Difference to Persuasiveness and Can't Be Counted On—Not Even When Moderating Conditions Are Specified Persuaders face many message design choices: narrative or non-narrative format, gain-framed or loss-framed appeals, one-sided or two-sided messages, and so on. But a review of 1,149 studies of 30 such message variations reveals that, although there are statistically significant differences in persuasiveness between message forms, it doesn't make much difference to persuasiveness which option is chosen (as evidenced by small mean effect sizes, that is, small differences in persuasiveness: median mean rs of about 0.10); moreover, choosing the on-average-more-effective option does not consistently confer a persuasive advantage (as evidenced by 95% prediction intervals that include both positive and negative values). Strikingly, these results obtain even when multiple moderating conditions are specified. Implications for persuasive message research and practice are discussed. Designing effective persuasive messages is an important communicative task. Encouraging people to exercise regularly, vote for a candidate, buy a product, reduce home energy consumption, donate to a charity, be screened for a disease, alter their diet, give blood, wear seat belts-all these purposes can potentially be advanced through persuasive messages. Considerable experimental research has explored the effects of message design choices that persuaders might face, such as using gain-framed or loss-framed messages, narrative or non-narrative formats, strong or weak threat appeals, and so on. In an individual experiment, the size and direction of the difference in persuasiveness between the two message forms being compared can be represented by an effect size index such as d (standardized mean difference) or r (correlation). A large effect size indicates a large difference in the persuasiveness of the two message forms being compared; a small effect size indicates little difference in persuasiveness. For many message design choices (message variables), meta-analytic reviews have provided a good evidentiary basis for conclusions about how such choices affect relative message persuasiveness. Two properties of these effects are naturally of interest to message designers. One is the size of the effect, that is, the mean effect size: How large a persuasive advantage is conferred on average by choosing the message form that is generally more effective? The other is the consistency of the effect across studies: How much does the persuasive advantage vary from one application to another? From a message designer's point of view, of course, the ideal would be to know of large, consistent effects-message design choices that would dependably produce large differences in relative persuasiveness. The broad purpose of this paper is to describe the size and consistency of the effects of persuasive message design choices, using evidence from extant meta-analyses. Previous review discussions of such meta-analyses have focused on the average size of the main (simple, non-contingent) effects of design choices (message variations). The present review aims to deepen such analyses in two ways. First, it considers not only the main effects of design choices but also contingent effects, that is, effects observed under moderating conditions. Second, it considers not only the size but also the consistency of both main and contingent message-variation effects. 
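Because the results below move freely between the two effect size indices just mentioned, d and r, it may help to keep the standard conversions at hand. The following is a minimal sketch; the example values are illustrative only and assume two equal-sized groups, not data from any study reviewed here.

```python
# Standard conversions between d (standardized mean difference) and r
# (correlation), assuming two equal-sized groups. Example values are invented.
import math

def d_to_r(d):
    # r = d / sqrt(d^2 + 4)
    return d / math.sqrt(d**2 + 4)

def r_to_d(r):
    # d = 2r / sqrt(1 - r^2)
    return 2 * r / math.sqrt(1 - r**2)

print(round(r_to_d(0.10), 3))  # r = 0.10 corresponds to d ≈ 0.201
print(round(d_to_r(0.20), 3))  # d = 0.20 corresponds to r ≈ 0.100
```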
The next section elaborates these purposes and provides a more careful specification of the approach taken.

BACKGROUND

Previous Reviews

Summaries of the relevant meta-analytic evidence have consistently concluded that the mean effect sizes associated with persuasive message design choices are rather small. For example, over 20 years ago, Dillard (1998) reviewed nine such meta-analyses, reporting that the mean effect size (expressed as r) was only 0.18. Weber and Popova (2012) extracted 206 effect sizes from "persuasion effects" meta-analyses; the median r was 0.11 (similarly, see Rains et al., 2018).

Such previous summaries are useful but are limited in two ways. First, these summaries have focused on the main (simple, non-contingent) effects of persuasive message design choices. The finding that such effects are not large is perhaps unsurprising. A common expectation seems to be that the effects of message design choices are inevitably contingent on various moderating factors. Crudely put, there might be "interaction effects all the way down" (Vivalt, 2015, p. 468). As Rains et al. (2018, p. 121) pointed out, their focus on main effects might make for a misleading picture because larger effects could occur under specific moderating conditions. So even if on average a given design choice enhances persuasion only a little, more substantial increases might be observed under the right set of moderating conditions.

Second, previous summaries are limited by having focused on the size of the effects, with little attention given to effect variability (heterogeneity). Attending to heterogeneity can be important for various purposes, such as serving as a goad to theoretical progress. For example, Linden and Hönekopp (2021) have argued that in psychological research, unexplained heterogeneity is a sign that the phenomenon is poorly understood, and hence reduction of such heterogeneity is a marker of research progress (see also Kenny and Judd, 2019). Where these purposes are central, an interest in moderator variables will lead to questions about whether moderators can explain variability. In the present project, however, our primary interest in the heterogeneity of message-variation effect sizes has a different purpose, namely, to consider the implications of such variability for message design practices.

The consistency of effect sizes associated with a given message variation is especially relevant to persuasive message design recommendations. Imagine, for example, two message variations with identical mean effect sizes-but for one message variation the observed effect sizes all cluster closely around that mean (little variability from study to study in the size of the persuasive advantage conferred) whereas for the other message variation the observed effect sizes are quite variable. One would have rather more confidence in the expected effects of the first design choice than in those of the second. As another illustration: Even if a given message design variation produces only a small mean effect size (only a small average persuasive advantage from choosing the more effective message form), that small advantage may be especially valuable if it can be obtained consistently. And, plainly, considerations of effect-size variability are relevant to both main effects and contingent effects. It is important to understand how consistently a given message design choice yields a persuasive advantage, whether in general or when moderating conditions are specified.
Briefly, then: Where previous summaries of message-variation meta-analyses have focused on the size of main (non-contingent) effects, the present report examines both the size and the variability of both main effects and contingent effects, that is, effects observed under specified moderating conditions. Comparisons between main effects and contingent effects are of special interest. It seems natural to expect that as moderating conditions are taken into account, the size of the persuasive advantage enjoyed by the more effective message form can increase (i.e., under appropriate moderating conditions, effect sizes will increase) and the persuasive advantage can be obtained more consistently (i.e., under appropriate moderating conditions, effect sizes will become more consistent), compared to what is observed when examining only main effects.

Describing the size of message-variation effects is familiar and straightforward: one examines the mean effect size, that is, the average difference in persuasiveness between the two message forms. However, effect-size variability has not yet been the focus of much explicit attention, so the next section provides some background.

Describing Effect-Size Variability

For describing the variability of effect sizes, one might initially think of various meta-analytic heterogeneity indices such as I², Q, H, Birge's R, and the like (for some discussions of such indices, see Birge, 1932; Higgins and Thompson, 2002; Higgins et al., 2003; Huedo-Medina et al., 2006; Card, 2012, p. 184-191). But these heterogeneity indices do not capture the desired property. Broadly speaking, these indices express observed heterogeneity as a proportion of, or a ratio involving, the heterogeneity that might be expected from sampling variation alone. For example, Birge's R is the ratio of observed to expected variation; I² describes the percentage of observed variation that is not attributable to chance (Higgins et al., 2003, p. 558). The implication is that such indices do not provide the desired information because the indices do not describe how effect sizes vary in absolute terms. This point is easily misunderstood, which can lead to erroneous interpretations of values of indices such as I² (e.g., thinking that if I² is large then the effect sizes vary considerably from study to study). In fact, as Borenstein et al. (2017) have explained, because I² is a proportion rather than an absolute value, it cannot indicate how much effect sizes vary (see also Rücker et al., 2008).

However, the desired property can be described using prediction intervals. Expressed in terms of the context of present interest, a prediction interval specifies the plausible range of effect sizes to be observed in the next application (e.g., the next study). Thus, "a prediction interval provides a description of the plausible range of effects if the treatment is applied in a new study or a new population similar to those included in the meta-analysis" (Partlett and Riley, 2017, p. 302). "A 95% prediction interval estimates where the true effects are to be expected for 95% of similar (exchangeable) studies that might be conducted in the future" (IntHout et al., 2016, p. 2). Prediction intervals (PIs) are not to be confused with confidence intervals (CIs), as these address different questions (see Borenstein, 2019, p. 94-96). The 95% CI around a meta-analytic mean effect size gives the range of plausible population (mean) values; the 95% PI gives the range of plausible future individual effect sizes.
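To make the distinction concrete, here is a minimal sketch of how a random-effects mean, its 95% CI, and the 95% PI can be computed from a set of correlations. It assumes the DerSimonian-Laird estimate of the between-study variance and the t-based prediction interval described by Higgins et al. (2009); it is not the authors' own code, and the rs and ns are invented for illustration.

```python
# A minimal sketch: DerSimonian-Laird tau^2 plus the t-based 95% prediction
# interval of Higgins et al. (2009). Correlations are analyzed on Fisher's z
# scale; the rs and ns below are invented, not from any reviewed meta-analysis.
import numpy as np
from scipy import stats

rs = np.array([0.05, 0.12, -0.03, 0.20, 0.09, 0.15, 0.01, 0.11, 0.18, 0.07])
ns = np.array([120, 200, 80, 150, 310, 95, 400, 60, 180, 220])

y = np.arctanh(rs)   # Fisher's z for each study
v = 1.0 / (ns - 3)   # within-study variance of z

# Fixed-effect weights give Q, from which tau^2 (between-study variance) follows.
w = 1.0 / v
q = float(np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2))
df = len(y) - 1
c = w.sum() - np.sum(w**2) / w.sum()
tau2 = max(0.0, (q - df) / c)
i2 = max(0.0, (q - df) / q)  # I^2: share of variation beyond chance (a proportion)

# Random-effects mean and its standard error.
w_star = 1.0 / (v + tau2)
mu = np.sum(w_star * y) / w_star.sum()
se = np.sqrt(1.0 / w_star.sum())

ci = mu + np.array([-1.0, 1.0]) * 1.96 * se  # 95% CI for the mean effect
t_crit = stats.t.ppf(0.975, df=len(y) - 2)
pi = mu + np.array([-1.0, 1.0]) * t_crit * np.sqrt(tau2 + se**2)  # 95% PI

# Back-transform to the r metric for reporting.
print("mean r:", np.round(np.tanh(mu), 3))
print("95% CI:", np.round(np.tanh(ci), 3))
print("95% PI:", np.round(np.tanh(pi), 3), " I^2:", round(i2, 2))
```

Because the PI adds tau² under the square root, it is never narrower than the CI; with substantial tau² it can straddle zero even when the CI does not, which is exactly the pattern reported below.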
The relationship of PIs and CIs is sometimes misunderstood because of a misapprehension about CIs-specifically, a belief that the CI describes the dispersion of effects in individual studies. It does not. The CI describes the range within which the mean effect (the population effect) is likely to be found. But answering the question "where is the mean effect likely to be?" is different from answering the question "where is the effect size in an individual study likely to be?" Thus, PIs are not a replacement for, or an alternative to, or a competitor with, CIs. Rather, PIs provide information that CIs do not and hence are useful adjuncts to CIs. Specifically, the PI gives the range of plausible future individual effect sizes, that is, the range of plausible results in individual studies. If one wants to know where an effect size might fall in a future application, the PI is informative but the CI is not.

Without prediction-interval information, one cannot be sure just how confidently to recommend a given message design choice. For example, suppose there is a statistically significant mean effect favoring message form A over message form B (either in general or under some specific set of moderating conditions); that is, the 95% CI around the meta-analytic mean effect excludes zero. Knowing that the mean effect favors form A does not necessarily imply that every future individual application will favor form A; it's a separate question how consistently form A delivers some advantage over form B. That question cannot be answered by the CI; to see the plausible range of individual effect sizes, the PI is wanted. So if the 95% PI for that comparison of form A and form B also excludes zero, then choosing form A is pretty much a sure thing; it's very unlikely that message form B would ever turn out to be more effective in the relevant circumstances. On the other hand, if the 95% PI straddles zero and so includes both positive and negative values, then one can't count on form A being the better choice; sometimes form B will turn out to be more effective (despite the statistically significant mean effect, that is, even though the 95% CI excludes zero). In short, prediction intervals provide exactly the sort of representation of effect consistency that is wanted for the present application (for some discussion of prediction intervals, see Higgins et al., 2009; Riley et al., 2011; IntHout et al., 2016; Borenstein, 2019, p. 85-93; Nagashima et al., 2019).

To be sure, PIs are imperfect; for example, with fewer than ten cases, PIs will understandably be imprecise (see Meeker et al., 2017, p. 180, Figure 10.1). But for speaking to the question of the consistency of expected future effect sizes, PIs provide information that neither familiar heterogeneity indices nor CIs do.

Summary

Persuasive message designers would hope to learn of message design choices that consistently make for large differences in persuasiveness, either in general (main effects) or contingently (when moderating conditions are specified). Existing meta-analytic evidence is relevant to identifying such choices, but previous reviews of those meta-analyses have focused only on the size of main effects. There has not been systematic examination of the variability of main-effect effect sizes or of the magnitude and variability of effect sizes when moderating conditions are taken into account. The research reported here aims to remedy that lack.
But we are at some pains to emphasize: Our research questions concern the relative persuasiveness of alternative message forms, not the absolute persuasiveness of a message or message form. The data of interest in the present project-the effect sizes-do not address questions of how effective persuasive messages are or can be. Some readers have thought that (e.g.,) small effect sizes in the studies under review here indicate that persuasive messages are not very effective. This reflects a misapprehension. In a study comparing the effectiveness of two persuasive message forms (e.g., narrative vs. non-narrative), the effect size of interest describes the difference in persuasive effectiveness between the two messages-not the effectiveness of either message individually or the average effectiveness of the two messages (O'Keefe, 2017). If two messages are both highly effective and so don't differ much in effectiveness, the effect size will be small even though each message was in absolute terms quite effective. To address the questions of interest here-questions about the size and consistency of the differences in persuasiveness associated with message design choices-the effect sizes analyzed provide exactly the sort of data needed.

Overview

Existing relevant meta-analyses of persuasive message variations were identified. To provide a uniform basis for comparison, each meta-analysis's set of effect sizes (ESs) was analyzed using random-effects procedures to provide a mean ES, a 95% CI, and a 95% PI-both for the main effect of the message variation and for contingent effects (within levels of moderator variables).

Literature Search and Inclusion Criteria

Meta-analyses of potential interest were identified by searches of the PsycEXTRA, ProQuest Dissertations and Theses, PsycINFO, ERIC, Medline, and Web of Science databases, combining meta-analysis with such terms as persuasion, message, and attitude, through November 2019. Relevant reviews of meta-analyses were also examined to identify potential candidates (e.g., Eisend and Tarrahi, 2016; Hornik et al., 2016, 2017; Rains et al., 2018). Additional candidates were located through examination of textbooks and through personal knowledge of the literature.

The meta-analyses of interest were ones that reviewed experimental studies comparing two versions of a message that varied with respect to some specified message property, where relative persuasiveness was assessed; the ESs of interest thus represented relative effects on belief, attitude, intention, or behavior outcomes, or composite relative effects on more than one of these (as when a meta-analysis reported composite ESs, e.g., combining attitude and intention effects). Broadly put, meta-analyses were excluded if they did not speak to the effects of persuasive message design choices. Meta-analyses were excluded if information about the effect sizes and associated sample sizes was unavailable even after correspondence with authors (e.g., Compeau and Grewal, 1998; Reinard, 1998; Floyd et al., 2000; Piñon and Gambara, 2005; Freling et al., 2014). If multiple meta-analyses of a given message variation were available, the one with the largest number of ESs was selected; any meta-analysis based on fewer ESs would, ceteris paribus, provide less accurate estimates of the properties of interest.
For similar reasons, if a meta-analysis of a given message variation reported results separately for different persuasion outcomes (e.g., separate results for attitude and for intention), the outcome with the largest number of ESs was selected; different persuasion outcomes have been found to yield equivalent mean effect sizes (O'Keefe, 2013) and hence could be treated as interchangeable. Because "when the analysis includes at least ten studies, the prediction interval is likely to be accurate enough to be useful" (Borenstein, 2019, p. 93), we excluded meta-analyses with fewer than 10 ESs (e.g., Burrell and Koper, 1998; Bigsby and Wang, 2015; Lull and Bushman, 2015, concerning violent content; O'Keefe, 1998, concerning quantification; O'Keefe, 2000, concerning guilt). Finally, we excluded a meta-analysis if the list of ESs included multiple ESs based on the same message pair or set of participants, as when separate ESs were entered for two different attitude measures from the same participants (e.g., Eisend, 2006). A list of excluded meta-analyses (with reasons for exclusion) is provided in Appendix 1.

But to illustrate the application of these principles, a few specific examples may be useful. Braddock and Dillard's (2016) meta-analysis of narrative messages was excluded because the studies reviewed did not have an appropriate comparison condition. This meta-analysis included only studies that compared a narrative message against a control condition lacking the narrative message or any message like it (e.g., no-message controls and irrelevant-message controls). Such studies speak to the question of whether using a narrative message is more persuasive than staying silent on the subject. The present project is focused on a different question: Given that some message is to be used and hence some message design choices faced, which design options are relatively more persuasive? Thus, for example, Shen et al.'s (2015) meta-analysis, which reviewed studies comparing the persuasiveness of narrative and non-narrative messages, was included in the present analysis; such studies speak to the question of whether a message designer should favor a narrative or a non-narrative format.

Tannenbaum et al.'s (2015) meta-analysis was excluded because it compared fear appeals against a combination of a number of different control conditions, including no-message control conditions (see Tannenbaum et al., 2015, p. 1183). Because effect sizes were collapsed across a variety of different control conditions (high-fear message vs. no-message control, high-fear message vs. low-fear message, etc.), the reported mean effect sizes could not be interpreted straightforwardly as bearing on the present research questions. However, White and Albarracín (2018) reported meta-analytic results for a subset of cases concerning specifically the high-fear-vs.-low-fear message contrast; that meta-analysis was included in the present analysis.

Noar et al.'s (2016) meta-analysis of pictorial vs. text-only cigarette pack warnings was excluded because no reported relevant outcome had a sufficiently large number of cases. This meta-analysis reported results for 17 different outcome variables (see p. 347, Table 3), but only one specific outcome had more than 10 effect sizes (the minimum required for inclusion in the present project): "negative affective reactions" such as disgust or fear, which are not outcomes of interest in the present analysis.
Relevant outcomes (such as negative smoking attitudes and intentions to quit smoking) had fewer than 10 effect sizes.

Analysis

Each meta-analysis identified by these inclusion criteria provided a set of effect sizes and associated sample sizes for a given message variation (design choice). The ESs were accepted as given in each meta-analytic dataset, without being adjusted, deleted, recomputed, or otherwise altered, save for converting all ESs to correlations (rs) for analysis. The ESs were analyzed in three ways.

Main-Effects Analysis

First, each meta-analysis's set of effect sizes was analyzed using random-effects methods to provide an estimate of the mean effect and the associated 95% CI (Borenstein et al., 2005). The 95% PI was obtained using procedures described by Borenstein et al. (2017; see also Borenstein, 2019, p. 85-93; Borenstein et al., 2009, p. 127-133). When PI widths were analyzed, the width was computed as the simple difference between the upper and lower bounds of the PI.

One-Moderator Analysis

Second, for each set of effect sizes, results for any reported categorical moderator variables were also examined. Because of our interest in guidance for message design, we examined moderators concerning attributes of messages (e.g., content variations) and audiences (e.g., sex) and excluded those concerning study characteristics (e.g., publication outlet). If a level of such a moderator variable had at least 10 ESs, the mean effect size, 95% CI, and 95% PI were computed for those ESs. For example, in Lee et al.'s (2019) meta-analysis of 18 ESs concerning the "that's not all" technique, one moderator examined was the nature of the target request-whether the request concerned product purchase (k = 14) as opposed to volunteering time or donating money (k = 4). We analyzed the product-purchase ESs to see the size and consistency of the persuasive advantage given that contingency, but (because of the small number of cases) did not analyze ESs at the other level of that moderator.

Two-Moderator Analysis

Third, for each set of effect sizes and moderator variables, results for combinations of two moderating variables were also examined. If such a combination of moderator variables had at least 10 ESs, the mean effect size, 95% CI, and 95% PI were computed for those ESs. For example, in Walter et al.'s (2018) meta-analysis concerning the use of humor, 10 ESs were obtained under conditions where the audience's involvement was low and the humor style was satire.

RESULTS

The 30 message-variation meta-analyses were based on 1,149 studies with 280,591 participants. The mean number of studies per meta-analysis was 38.3; the median was 29. The mean number of participants per meta-analysis was 9,353; the median was 4,422. A total of 337 mean effect sizes were analyzed: 30 representing main effects of the message variations, 93 representing effects of those variations under one moderating condition, and 214 representing effects under a combination of two moderating conditions. Detailed results appear in the Supplementary Materials, in Appendix 3 (for one-moderator effects) and Appendix 4 (two-moderator effects).
Appendix 5 gives results for the single largest mean ES for each message variation's one-moderator and two-moderator contingencies, regardless of whether the mean ES was statistically significant; Appendices 6 and 7 provide results for statistically significant mean ESs for each message variation for the one-moderator (Appendix 6) and two-moderator (Appendix 7) contingencies, regardless of the size of the mean ES.

The magnitudes of the mean ESs are summarized in several ways in Table 2, which reports values for main-effect ESs (all such effects and the statistically significant effects), one-moderator ESs (all such effects, the largest mean ESs for a single moderator level, and the statistically significant effects), and two-moderator ESs (all such effects, the largest mean ESs for a combination of two moderator levels, and the statistically significant effects). For each of those categories, the values reported are the simple unweighted mean absolute-value mean effect size, the simple unweighted median absolute-value mean effect size, the mean upper limit of the 95% CI, and the median upper limit of the 95% CI. These latter two properties are of interest because the upper limit of the CI marks the largest plausible population value. The consistency of the ESs is summarized in Table 3, which reports values for main-effect ESs (all such effects and the statistically significant effects), one-moderator ESs (all such effects, the largest effects, the statistically significant effects, and the narrowest PIs), and two-moderator ESs (all such effects, the largest effects, the statistically significant effects, and the narrowest PIs). Appendix 8 gives detailed results that specify the narrowest PIs for each message variation's one-moderator and two-moderator contingencies.

Effect-Size Magnitudes

Main Effects

The 30 main-effect mean ESs ranged from −0.002 to 0.287 (Table 1). The median mean ES for all 30 cases was 0.10, and for the 22 statistically significant cases was 0.13. The corresponding median upper limits of the 95% CI were 0.16 and 0.19 (Table 2).

One-Moderator Effects

The 93 one-moderator mean ESs ranged from −0.049 to 0.321 (Appendix 3). The median mean ES for all 93 cases was 0.07, with a median 95% CI upper limit of 0.14. (The median, or mean, mean ESs for moderator effects, whether one moderator or two, are reported for completeness but are arguably uninformative. For example, the mean one-moderator effect for a given message variation will commonly roughly equal the main effect for that variable. To see this concretely: Imagine the main effect mean ES for a given message variation is r = 0.15. And suppose the individual studies divide equally across the two levels of a moderator, with mean ESs of 0.10 and 0.20 for the two levels. Ceteris paribus, the average across those will be about 0.15. However, examination of subsets of moderator effects-such as the largest ones or the statistically significant ones-is instructive.)

Of the 30 message variations reviewed, only 15 had any one-moderator levels with at least 10 ESs (i.e., only 15 qualified for analysis). For each of those 15 variations, the largest one-moderator ES was identified (Appendix 5). For those 15 largest mean ESs, the median mean ES was 0.13, with a median 95% CI upper limit of 0.23. For the 52 statistically significant one-moderator mean ESs (Appendix 6), the median mean ES was 0.10, with a median 95% CI upper limit of 0.16 (Table 2).
By comparison, for specifically the 15 message variations that contributed to those (largest and statistically significant) one-moderator effects, the median main-effect mean ES was 0.13, with a median 95% CI upper limit of 0.15. The 15 largest one-moderator mean ESs were also used to get a sense of the potential for individual moderators to yield larger mean effect sizes, by comparing each such mean ES against the corresponding main-effect mean ES. The median amount of increase in the mean ES was 0.06 (e.g., from 0.11 to 0.17).

Two-Moderator Effects

The 214 two-moderator mean ESs ranged from −0.071 to 0.276 (Appendix 4). The median mean ES for all 214 cases was 0.06, with a median 95% CI upper limit of 0.14. Of the 30 message variations reviewed, only nine had any two-moderator levels with at least 10 ESs (i.e., only nine qualified for analysis). For each of those nine variations, the largest two-moderator mean ES was identified (Appendix 5). For those nine largest mean ESs, the median mean ES was 0.13, with a median 95% CI upper limit of 0.22. For the 72 statistically significant two-moderator mean ESs (Appendix 7), the median mean ES was 0.10, with a median 95% CI upper limit of 0.18 (Table 2). By comparison, for specifically the nine message variations that contributed to those (largest and statistically significant) two-moderator effects, the median main-effect ES was 0.07, with a median 95% CI upper limit of 0.11; the median largest one-moderator ES was 0.12, with a median 95% CI upper limit of 0.22. The nine largest two-moderator mean ESs were also used to get a sense of the potential for moderator combinations to yield larger mean effect sizes, by comparing each such mean ES against the corresponding main-effect mean ES. The median amount of increase in the mean ES was 0.07 (e.g., from 0.08 to 0.15).

Effect-Size Consistency

Main Effects

For the 30 main-effect mean ESs (detailed in Table 1), the mean 95% PI width was 0.54; the median was 0.54 (Table 3). The simple unweighted means of the lower and upper bounds of the 95% PIs were, respectively, −0.17 and 0.37; the corresponding medians were −0.16 and 0.37. Twenty-eight (93%) of the PIs included both positive and negative values.

One-Moderator Effects

For the 93 one-moderator mean ESs (detailed in Appendix 3), the mean 95% PI width was 0.55; the median was 0.53 (Table 3).

To get a sense of the potential for individual moderators to yield narrower PIs, two additional "best case" analyses were conducted (see Table 3). First, for each message variation we compared the width of the PI from the one-moderator level that had the narrowest PI (identified in Appendix 8) against the width of the PI for the corresponding main-effect mean ES (as given in Table 1). Across 15 such cases, the median width of the 95% PI was 0.40 for one-moderator mean ESs (median limits of −0.09 and 0.30) and was 0.52 for main-effect mean ESs (median limits of −0.17 and 0.35). Second, we compared the width of the PI from one-moderator levels that had statistically significant mean ESs (identified in Appendix 6) against the width of the PI for the corresponding main-effect mean ESs (Table 1). Across 52 such cases, the median width of the 95% PI was 0.46 for one-moderator mean ESs (median limits of −0.14 and 0.34) and was 0.52 for main-effect mean ESs (median limits of −0.17 and 0.35).

Two-Moderator Effects

For the 214 two-moderator mean ESs (detailed in Appendix 4), the mean 95% PI width was 0.58; the median was 0.58 (Table 3).
The simple unweighted means of the lower and upper bounds of the 95% PIs were, respectively, −0.23 and 0.35; the corresponding medians were −0.23 and 0.34. Two hundred eleven (99%) of the PIs included both positive and negative values. For the nine cases representing the largest two-moderator ESs (detailed in Appendix 5), eight (89%) of the PIs included both positive and negative values; for the 72 cases representing statistically significant two-moderator ESs (detailed in Appendix 7), 69 (96%) of the PIs included both positive and negative values. By comparison, for specifically the nine variations that contributed to those two-moderator results, nine (100%) of the PIs for the main-effect ESs included both positive and negative values (Table 1); for the largest one-moderator ESs among those nine variations, eight (89%) of the PIs included both positive and negative values (Appendix 5).

To get a sense of the potential for moderator combinations to yield narrower PIs, two additional "best case" analyses were conducted (see Table 3). First, for each message variation we compared the width of the PI from the two-moderator combination that had the narrowest PI (detailed in Appendix 8) against the width of the PI for the corresponding main-effect mean ES (in Table 1). Across nine such cases, the median width of the 95% PI was 0.30 for two-moderator mean ESs (median limits of −0.12 and 0.18) and was 0.45 for main-effect mean ESs (median limits of −0.17 and 0.27). Second, we compared the width of the PI from two-moderator levels that had statistically significant mean ESs (identified in Appendix 7) against the width of the PI for the corresponding main-effect mean ESs (Table 1). Across 72 such cases, the median width of the 95% PI was 0.54 for two-moderator mean ESs (median limits of −0.17 and 0.36) and was 0.45 for main-effect mean ESs (median limits of −0.17 and 0.27).

DISCUSSION

These results support three broad conclusions, which in turn yield two sets of implications.

Conclusions

Briefly: Message design choices don't make much difference to persuasiveness (message-variation mean effect sizes are small), the effect of a given design choice varies considerably from one application to another (message-variation prediction intervals are wide), and moderator variables have little impact on the size and variability of effects.

Small Mean Effects

Persuasive message design choices yield rather modest differences in persuasive effectiveness. The magnitude of the main effects observed in the present analysis (median mean r = 0.10) is consistent with the parallel results reported in other reviews: median rs of 0.11 (Weber and Popova, 2012) and 0.13 (Rains et al., 2018). But-strikingly-similarly small effects are seen when examining all of the one-moderator effect sizes (median mean r = 0.07), the largest one-moderator effect sizes (0.13), the statistically significant one-moderator effect sizes (0.10), all of the two-moderator effect sizes (0.06), the largest two-moderator effect sizes (0.13), and the statistically significant two-moderator effect sizes (0.10). (To put such values in perspective: imagine, counterfactually, a measure of the absolute persuasiveness of a given message that was scaled as IQ scores commonly are: M = 100, sd = 15. An effect size of r = 0.10 corresponds to d = 0.20, which is a difference of 3 points on such a scale. That's the difference between scores of 133 and 136, or 81 and 84, or 165 and 168. These are not large differences.)

Examination of the upper bounds of the 95% CIs is also illuminating, because the upper bound sets a limit on plausible mean (population) values. These results suggest that large mean effects are not to be expected; median upper-bound values never exceed r = 0.23.
And that's the case no matter whether one looks at main effects (median upper bound of r = 0.16), all of the one-moderator effect sizes (0.14), the largest one-moderator effect sizes (0.23), the statistically significant one-moderator effect sizes (0.16), all of the two-moderator effect sizes (0.14), the largest two-moderator effect sizes (0.22), or the statistically significant two-moderator effect sizes (0.18).

And it should not pass unnoticed that the mean effect sizes reported here-as small as they typically are-might nevertheless be inflated. For example, if the studies reviewed in these meta-analyses were affected by outcome-biased reporting (such that studies with smaller effects were less likely to have been available to be included in the meta-analyses), then these mean effect sizes will exaggerate the actual effects (Friese and Frankenbach, 2020; Kvarven et al., 2020; on the difficulties of detecting such biases, see Renkewitz and Keiner, 2019). In short, there is no evidence that message design choices will characteristically make for large differences in persuasive effectiveness-not even when moderating variables are taken into account.

Substantial Variability in Effects

For persuasion message variations, the associated 95% PIs are quite wide, usually including both positive and negative values (i.e., straddling zero). And that's the case no matter whether one looks at all of the main effects (where 93% of the PIs straddle zero), the statistically significant main effects (91%), all of the one-moderator effect sizes (97%), the largest one-moderator effect sizes (87%), the statistically significant one-moderator effect sizes (94%), all of the two-moderator effect sizes (99%), the largest two-moderator effect sizes (89%), or the statistically significant two-moderator effect sizes (96%).

It is important not to be misled here by statistical significance. Knowing that a given mean ES is statistically significantly different from zero (i.e., knowing that its 95% CI excludes zero) does not provide information about the variability of ESs from one implementation to another. A statistically significant mean ES presumably indicates that there is a genuine non-zero population effect-but that speaks only to the location of the average effect, not to the dispersion of individual effect sizes around that mean.

An illustrative example is provided by the message variation contrasting narrative and non-narrative message forms. Under six different one-moderator conditions, there was a statistically significant mean ES favoring narrative messages. But for each of those six, the PI includes negative values-in fact, negative values greater in absolute terms than the positive mean effect (Appendix 6). For instance, when the medium is print, the mean persuasive advantage for narrative forms corresponds to r = 0.055, but the lower bound of the PI is −0.127. And the same pattern is observed for the four statistically significant two-moderator mean ESs for this message variation: for each of those four two-moderator combinations, the PI includes negative values greater in absolute terms than the positive mean effect (Appendix 7).
When a PI includes both positive and negative values, the data in hand are consistent with seeing both positive and negative effect sizes in future applications. As these data make clear, it is common for persuasion message-variation PIs to include both positive and negative values. The implication is that one should not be surprised to see one study in which message form A is more persuasive than message form B and another study in which the opposite effect obtains. In short, there is no evidence that message design choices will characteristically yield effects that are consistent in direction from application to application-not even when moderating variables are taken into account.

Minimal Effects of Moderating Factors

One might have imagined that as moderating conditions are specified, the size and consistency of the effects of design choices would noticeably increase. However, as just discussed, these data indicate moderating variables do not have much effect on either the size or the consistency of such effects.

Moderators and Effect-Size Magnitudes

To obtain a basis for realistic expectations about the maximum degree to which the consideration of moderating factors might increase the size of the effects of persuasive message design choices, the largest mean ESs observed under moderating conditions are instructive. In that "best case" analysis, there was not much increase in the mean effect size beyond that observed for main effects: when single moderators were considered, the median increase in r was 0.06; for combinations of two moderators the median increase was 0.07. And this is the most that might be expected, because these are the largest mean ESs observed under moderating conditions. Expressed another way: The largest mean ESs found under moderating conditions were indeed numerically larger than those for main effects. But even when two moderators were considered jointly, the median increase (in those largest-mean-ES cases) corresponds to roughly the difference between a mean ES of r = 0.10 and one of r = 0.17. That is, even the largest ESs observed under moderating conditions are typically not dramatically larger (nor dramatically large)-and these are the effects for the atypically large moderating-factor mean ESs.

To contextualize these best-case mean effects, consider the median mean ESs associated with the moderator conditions that produced the narrowest PIs (Table 2) as compared to the median mean ESs for the corresponding main effects (Table 1). When single moderators were considered, across 15 cases the median mean ES decreased by 0.04 (median mean ESs of 0.08 for the one-moderator cases, 0.12 for the corresponding main effects). When two moderators were considered, across nine cases the median mean ES decreased by 0.04 (median mean ES of 0.03 for the two-moderator cases, 0.07 for the corresponding main effects). That is, the moderating conditions that had the most consistent effect sizes had smaller mean ESs (compared to the mean ESs observed for main effects).

Moderators and Effect-Size Consistency

To obtain a basis for realistic expectations about the maximum degree to which the consideration of moderating factors might reduce the width of prediction intervals, the narrowest PIs observed under moderating conditions are instructive. In that "best case" analysis, the width of the narrowest PIs under moderating conditions (Table 3) did not decrease much compared to that observed for main effects (Table 1).
When single moderators were considered, across 15 cases the median width was reduced by 0.12 (median widths of 0.40 for the one-moderator cases, 0.52 for the corresponding main effects). When two moderators were considered, across nine cases the median width was reduced by 0.17 (median widths of 0.30 for the two-moderator cases, 0.47 for the corresponding main effects). And this is the most decrease that might be expected, because these are the narrowest PIs observed under moderating conditions. Expressed another way: The narrowest PIs found under moderating conditions were indeed narrower than those for main effects. But even when two moderators are considered jointly, the median decrease (in those narrowest-PI cases) corresponds to roughly the difference between (say) a PI with limits of −0.18 and 0.28 and a PI with limits of −0.10 and 0.20. That is, even the narrowest PIs observed under moderating conditions are typically not dramatically narrower (or dramatically narrow)-and these are the effects for the atypically narrow moderating-factor PIs. (The width of the PI is affected by the number of cases (k), such that smaller numbers of cases produce wider PIs. One might therefore suspect that the substantial width of these PIs when moderators are considered is simply a consequence of the inevitably smaller number of cases on which moderator effects are based. However, the relative width of 95% PIs does not change much once 10 cases are in hand (see Meeker et al., 2017, p. 180, Figure 10.1). Given that all the moderator analyses reported here had at least 10 cases, the substantial width of persuasion message variable PIs when moderators are considered is unlikely to be entirely ascribable to the smaller number of cases.)

To contextualize these best-case PI widths, consider the width of the PIs associated with the moderator conditions that produced the largest mean ESs (Table 3) as compared to the width of the PIs for the corresponding main effects (Table 1). When single moderators were considered, across 15 cases the median width increased by 0.04 (median widths of 0.56 for the one-moderator cases, 0.52 for the corresponding main effects). When two moderators were considered, across nine cases the median width increased by 0.11 (median widths of 0.58 for the two-moderator cases, 0.47 for the corresponding main effects). That is, the moderating conditions that produced the largest mean ESs had less consistent effect sizes (compared to the consistency observed for main effects).

Effects of Joint Moderators

Notably, considering two moderators jointly did not make for substantially larger mean ESs or substantially more consistent effect sizes compared to what is seen when only a single moderator is considered. For example, for each message variation, the largest mean ESs had a median value of 0.13 when one moderator is considered and 0.13 when two moderators are considered (Table 2). Similarly, for each message variation, the narrowest PIs had a median width of 0.40 when one moderator is considered and 0.30 when two moderators are considered (Table 3).

Now perhaps researchers have not (yet) identified the right moderator variables, that is, ones that permit identification of circumstances under which large consistent effects can be expected. The moderators that are characteristically explored in the meta-analyses reviewed here are, in a sense, surface-level moderators-ones easily coded given the information provided in research reports.
But it may be that more subtle moderators are actually at work. Consider, for example, theorizing about gain-loss message framing effects. Rothman and Salovey (1997) suggested that the relative persuasiveness of the two message forms would be influenced by the nature of the advocated behavior: where the advocacy subject is disease detection behaviors, loss-framed appeals were predicted to generally be more persuasive than gain-framed appeals, but where the advocacy subject is disease prevention behaviors, gain-framed appeals were predicted to generally be more persuasive than loss-framed appeals. But subsequently rather subtler approaches have been advanced, such as Bartels et al.'s (2010) suggestion that the perceived risk of the behavior, not the type of behavior, moderates the effects of gain-loss message framing. The hypothesis is that when perceived risk is low, gain-framed appeals will be more persuasive than loss-framed appeals, but that when perceived risk is high, loss-framed appeals will have the persuasive advantage. It's not clear just how many relevant studies have been conducted concerning this hypothesis (see Updegraff and Rothman, 2013, p. 671), and at a minimum the evidence is not unequivocally supportive (e.g., Van't Riet et al., 2014; for a review, see Van't Riet et al., 2016). But this hypothesis is an example of how the present conclusions about effect-size magnitudes and consistency might need revision were evidence to accumulate concerning the effects of more refined moderating factors.

Along similar lines: Perhaps large consistent effects are not to be found unless one simultaneously considers three or four or six moderators. This possibility, too, cannot be discounted; the present analysis considers at most two simultaneous moderators and thus is unable to speak to whether large consistent effects appear in other circumstances. Were evidence to accumulate about the effects of more complex moderating conditions, the present conclusions might want modification.

So: The present results are inconsistent with any expectation that consideration of moderating variables will easily identify conditions under which design choices will consistently yield large persuasive advantages. Even when one takes into account the moderating variables that have been examined, message-variation mean effects do not noticeably increase in size and effect sizes do not noticeably increase in consistency. But one should be open to the possibility that future research might underwrite different conclusions.

Summary

Persuasion message variations have small and highly variable effects. This might lead some to be discouraged or dispirited, but such reactions would bespeak expectations that turned out to be unrealistic. No one is disappointed to learn that the mass of the electron is extraordinarily small-not unless they expected it would be larger. Similarly here: Any feelings of disappointment reflect (implicit) expectations that have turned out to not be aligned with reality.

We do want to emphasize: The effect sizes analyzed here describe the relative persuasiveness of two messages, not the absolute persuasiveness of a single message. These results do not speak to questions about whether persuasive messages are or can be effective, but rather to questions about the differences between messages in persuasiveness-and thus to the consequences of message design choices. And it's no good turning away from the apparent facts about the effects of message design choices.
The differences in persuasiveness between alternative message forms are rather small and the individual effect sizes are quite variable, even when moderating factors are taken into account. And that, in turn, has implications both for message design and for persuasion research.

Realistic Expectations

Message designers should have realistic beliefs about just how much they can improve effectiveness by their choices. It certainly is possible that in a given application, a message design choice might make a very large difference to effectiveness. But that very same design choice in another application might produce not just a weaker effect, but a negative (opposite) effect. And that's true even when the message variation has a statistically significant positive mean effect, and even under well-specified moderating conditions. Message designers should expect that their choices might on average provide incremental improvements, but not consistent dramatic ones. Similarly, those advising message designers should be modest, cautious, and humble. If message-variation effects were substantial and entirely consistent, one could be unreservedly confident in one's recommendations about persuasive message design: "Always choose message form A rather than message form B. Not only will A always be more persuasive, it will be a lot more persuasive." But given the results reported here, advisers will want to be rather restrained, even if there is a statistically significant meta-analytic mean difference in persuasiveness between the message kinds: "You should probably choose message form A rather than message form B, because on average A is more persuasive. However, A is likely to be only a little more persuasive than B, not enormously so. And A will not always be more effective than B; sometimes B will turn out to have been the better choice. So my advice is that you choose A, because you should play the odds. But it's not a sure thing, and it probably won't make a huge difference to persuasiveness."

Thus, our claim is not that persuasive message design choices don't matter at all. On the contrary, design choices do make a difference. After all, there are statistically significant differences between the persuasiveness of various message forms; that is, there are genuine (non-random) differences here. Our point is that the difference made, that is, the difference in persuasiveness between two design options, is not large. And although message design choices don't make for large differences in persuasiveness, even small differences might, in the right circumstances, be quite consequential (for a classic treatment, see Abelson, 1985; see also Prentice and Miller, 1992). For example, in close elections, a small effect on a small number of voters can be quite decisive (Neuman and Guggenheim, 2011, pp. 172-173). More generally, small effects can have significant consequences when examined over time and at scale (Götz et al., 2021). So persuasive message design choices can be important, even though, demonstrably, they make only a small difference to message persuasiveness.

Combining Design Features

Imagine a circumstance in which, on average, two-sided messages are more persuasive than one-sided messages and gain-framed appeals are more persuasive than loss-framed appeals and narratives are more persuasive than non-narratives.
Even if each feature individually doesn't boost persuasion that much, a message designer might hope that a two-sided gain-framed narrative could yield a rather large persuasive advantage over other combinations (especially a one-sided loss-framed non-narrative). However, there is no guarantee that the effects of design features will combine in a simple additive fashion. Direct empirical evidence on this question does not appear to be in hand, but related research, concerning the effects of combining different kinds of interventions (e.g., different behavior change techniques), suggests a complicated picture. Compared to single interventions, combinations have been found to be more effective (e.g., Huis et al., 2012; Griffiths et al., 2018), less effective (e.g., Jakicic et al., 2016; Wildeboer et al., 2016), and not different in effectiveness (e.g., Luszczynska et al., 2007; Brandes et al., 2019). So it might be the case that at least sometimes, combining message design features will yield larger persuasive advantages, but in the absence of direct evidence, enthusiasm for this prospect should be limited.

Implications for Persuasion Research

Primary Research

Researchers studying persuasive message effects should design primary research that accommodates these results in two ways. First, researchers will want to plan for larger sample sizes. Only much larger samples will provide sufficient statistical power for detecting the likely (small) population effects. For example, across the 30 main-effect mean ESs, the median effect size (r) was 0.10. To have statistical power of 0.80 (two-tailed test, 0.05 alpha) given a population effect of r = 0.10 requires 780 participants (Cohen, 1988, p. 93; a quick computational check appears in the sketch below). In the studies included in the meta-analyses reviewed here, the median sample size was 161. Even if one anticipates larger effect sizes under some moderating condition, a substantial number of participants will be needed. For example, with a population effect of r = 0.20, obtaining power of 0.80 (two-tailed test, 0.05 alpha) requires 195 participants. So if one expects an effect size of 0.20 when (say) involvement is high, a design with both high-involvement and low-involvement conditions will require a total of 390 participants. And if one expects to find that size of effect only when (say) involvement is high and communicator credibility is high, the design will require 780 participants.

Larger samples are also wanted for another reason. Although it is now widely understood that small sample sizes reduce the chances of finding genuine population effects, it seems not so well appreciated that low power also increases the chances of false-positive findings (Christley, 2010; see also Button et al., 2013). Thus, a small-sample study that produces a statistically significant effect might well be misleading. In short, both to enhance statistical power and to minimize the chances of false-positive results, persuasion researchers need larger samples.

Second, more studies are needed, especially ones addressing moderating conditions. For all that moderator variables are commonly assumed to be important influences on persuasion message-variation effects, there is remarkably little good evidence concerning moderator effects for most message design choices. Of the 30 message variations reviewed here, only 15 (50%) had sufficient data (k ≥ 10) to assess the potential role of single moderators, and only nine (30%) had sufficient data to assess the potential role of two moderators considered jointly.
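To make the sample-size figures cited above concrete, here is a minimal sketch (my illustration, not from the original) of the required-N computation via the Fisher z approximation; it reproduces Cohen's tabled values to within rounding:

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r, power=0.80, alpha=0.05):
    """Approximate N needed to detect a population correlation r
    with a two-tailed test, using the Fisher z approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for alpha = .05
    z_beta = norm.ppf(power)           # 0.84 for power = .80
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.10))  # ~783, i.e., roughly the 780 cited above
print(n_for_correlation(0.20))  # ~194, i.e., roughly the 195 cited above
```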
It appears that even among relatively well-studied message variations (sufficiently well studied to have merited meta-analytic attention), there is commonly not sufficient evidence in hand to speak with any confidence about the role of moderating factors. The importance of better evidence about moderating conditions is underscored by the commonality with which message-variation 95% PIs include both positive and negative values. To illustrate, consider a biomedical parallel. Suppose a new medical treatment, on average, improves patients' health (there's a statistically significant mean positive effect), but some patients are harmed by the treatment. In such a situation, researchers would presumably want to figure out exactly what leads to those negative outcomes (what conditions foster such results) so as to be able to better indicate when the treatment should be used and when it should be avoided. Similarly here: Given that the PIs associated with persuasive message variations commonly include both positive and negative values, sound decisions about persuasive message design will require developing an understanding of the conditions that foster the different results (in which message form A is generally more persuasive than message form B, but sometimes the opposite effect occurs). And if thoroughly consistent effects (as represented by a PI that does not straddle zero) are likely to be found only when multiple moderating conditions are specified, then acquisition of better research evidence will be crucial.

Replication and Research Synthesis

Much attention has been given in recent years to apparent failures to replicate previous social-scientific research findings (e.g., Open Science Collaboration, 2015), with replication failures sometimes being interpreted as an indication that the originally claimed effect does not really exist. However, the present results suggest that replication failures should be expected to occur routinely in persuasion message effects research. De Boeck and Jeon (2018, p. 766) put it succinctly: "if effects vary from study to study, then replication failures are no surprise" (see, relatedly, Patil et al., 2016). Indeed, given the frequency with which the prediction intervals reported here encompass both positive and negative effects, it would be astonishing if apparent replication failures did not occur.

Unfortunately, current research design practices do not appear to acknowledge, or be well adapted to, this state of affairs. Experimental studies of persuasive message variations typically use a single concrete message to represent an abstract message category. So, for example, an experiment comparing gain-framed and loss-framed appeals will typically have just one example of each (a "single-message" design). But such a research design obviously cannot provide good evidence about whether any observed effects generalize across messages. Consider the parallel: A researcher hypothesizes that on average men and women differ with respect to some attribute, but designs a study that compares one particular man and one particular woman; that design is plainly not well-suited to provide relevant evidence, because claims about a general category of people require evidence from multiple instances of that category.
Similarly: A researcher hypothesizes that on average gain-framed and loss-framed messages differ in persuasiveness, but designs a study that compares one particular gain-framed appeal and one particular loss-framed appeal; that design is plainly not well-suited to provide relevant evidence, because claims about a general message category require evidence from multiple instances. Such single-message designs invite apparent replication failures. Thus, these results point to the value of multiple-message designs, that is, designs with multiple message pairs representing the contrast of interest (for some discussion, see Kay and Richter, 1977; Jackson and Jacobs, 1983; Thorson et al., 2012; Slater et al., 2015; Reeves et al., 2016). Multiple-message designs effectively have built-in replications of messages, providing a stronger basis for dependable generalizations. Data from such designs can be analyzed in ways that parallel meta-analytic methods, such as treating message as a random factor (Clark, 1973; Fontenelle et al., 1985; Jackson, 1992; Judd et al., 2012, 2017). Multiple-message designs offer some protection against the possibility that observed effects do not generalize across messages, but they cannot address other potential limitations (e.g., using human samples that are limited in some ways; Henrich et al., 2010). Even so, greater use of such designs plainly could accelerate the process of reaching dependable conclusions about persuasive message effects.

Summary

Persuasive message designers would like to know of message design choices that will consistently produce a large increase in persuasiveness, either in general (main effects) or contingently (under specified moderating conditions). It's been known for some time that general persuasion message-variation mean effect sizes are not large. The current results suggest that even under well-specified moderator conditions, choosing one message form over another characteristically makes for only a small average difference in persuasiveness. Moreover, such choices do not produce a persuasive advantage consistently, neither generally nor contingently. Message designers and researchers should plan accordingly.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS

DO'K and HH contributed equally to the conception and design of the project, acquisition of data, and analysis and interpretation of data. DO'K wrote the initial draft. DO'K and HH contributed equally to revisions. All authors contributed to the article and approved the submitted version.
Generalized Dirac bracket and the role of the Poincaré symmetry in the program of canonical quantization of fields 2

In this article the methods of canonical analysis and quantization that were reviewed in the first part of the series are applied to the case of the Dirac field in the presence of electromagnetic interaction. It is shown that the quantization of electrodynamics, which begins with a given Lagrangian and ends with the perturbative calculation of scattering probability amplitudes, can be performed in a way that does not employ the Poincaré symmetry of space-time at any stage. Also, the causal structure is not needed.

I. INTRODUCTION

In this second part of the sequel, I shall apply the canonical formalism reviewed in the first part to the case of electrodynamics with spinorial matter. I shall begin with the discussion of constraints and gauge transformations in Section II. The issue of integrating infinitesimal gauge transformations to the finite form will be addressed, and the total and extended Hamiltonian formalisms confronted. In Section III, the issue of consistent imposition of gauge conditions will be discussed on the simplest example of the Coulomb gauge. The generalized Dirac bracket (GDB) will be constructed for this case. Then the equations of motion will be discussed. Since they will appear to be by far too complicated to be exactly solved, the transition to the interaction picture will be necessary. When this is done, the equations simplify immensely and straightforward Fock quantization can be applied to the interaction picture fields. Knowledge of this interaction picture representation will appear to be sufficient for the perturbative description of scattering processes. In Section IV, the two examples of Compton scattering and electron, positron → muon, anti-muon scattering will be discussed. The elements of the S matrix in the lowest non-trivial order (i.e. the second order in the fine structure constant) will be explicitly computed. No Feynman rules will be postulated, although the relation of the calculations to the standard quantization based on Feynman diagrams will be explained.

II. THE STRUCTURE OF CONSTRAINTS OF ELECTRODYNAMICS

The theory of electromagnetic interaction is interpreted as a gauge theory of the U(1) group. The Lagrangian for fermions in the presence of this interaction is obtained through the minimal coupling procedure. Explicitly,

$$\mathcal{L} = \frac{i}{2}\left(\bar\psi\gamma^a D_a\psi - \overline{D_a\psi}\,\gamma^a\psi\right) - m\bar\psi\psi - \frac{1}{4}F_{ab}F^{ab} = \mathcal{L}_D + \mathcal{L}_{EM} - eA_a\bar\psi\gamma^a\psi, \qquad D_a\psi := (\partial_a + ieA_a)\psi,$$

where e is the electric charge and m represents the particle's mass. In order for the covariant derivative $D_a\psi$ to transform as $D_a\psi \to D'_a\psi' = e^{-ie\lambda}D_a\psi$ under the U(1) gauge transformation $\psi \to \psi' = e^{-ie\lambda}\psi$, the U(1) connection one-form (the electromagnetic four-potential) needs to transform as $A_a \to A'_a = A_a + \partial_a\lambda$. Clearly, the Lagrangian is then invariant under these transformations. The fields of the theory are ψ, ψ̄ and $A_a$. The equations for the conjugate momenta are

$$\pi = \frac{\partial\mathcal{L}}{\partial\dot\psi} = \frac{i}{2}\bar\psi\gamma^0, \qquad \bar\pi = \frac{\partial\mathcal{L}}{\partial\dot{\bar\psi}} = -\frac{i}{2}\gamma^0\psi, \qquad \pi^a = \frac{\partial\mathcal{L}}{\partial\dot A_a} = F^{a0},$$

from which the primary constraints follow:

$$\chi_1 := \pi - \frac{i}{2}\bar\psi\gamma^0 \approx 0, \qquad \chi_2 := \bar\pi + \frac{i}{2}\gamma^0\psi \approx 0, \qquad \gamma_1 := \pi^0 \approx 0.$$

There is a possible source of confusion in the notation, since the letter γ is used to denote the Dirac matrices as well as the first class constraints (I will argue that γ₁ is first class later on). The confusion can be avoided if one remembers that the Dirac matrices are denoted by γ with superscripts and the constraints are labeled by subscripts.
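As a quick consistency check (an addition of mine, not part of the original derivation), one can verify that on the surface defined by χ₁ and χ₂ the secondary constraint γ₂ introduced below reduces to the Gauss law:

$$\gamma_2 = \partial_j\pi^j + ie\,(\pi\psi - \bar\psi\bar\pi) \;\approx\; \partial_j\pi^j + ie\left(\frac{i}{2}\bar\psi\gamma^0\psi + \frac{i}{2}\bar\psi\gamma^0\psi\right) = \partial_j\pi^j - e\,\bar\psi\gamma^0\psi ,$$

so that γ₂ ≈ 0 is precisely $\partial_j\pi^j = e\bar\psi\gamma^0\psi$, the form quoted in Section III B.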
The canonical Hamiltonian is given by $H = \int \mathcal{H}\,d^3x$,

$$\mathcal{H} = \pi\dot\psi + \dot{\bar\psi}\bar\pi + \pi^a\dot A_a - \mathcal{L} = \bar\psi\left(-i\gamma^j\partial_j + m\right)\psi + \frac{1}{2}\pi^j\pi^j + \frac{1}{4}F_{ij}F_{ij} + eA_j\bar\psi\gamma^j\psi + A_0\left(e\bar\psi\gamma^0\psi - \partial_j\pi^j\right) + \partial_j\left(A_0\pi^j + \frac{i}{2}\bar\psi\gamma^j\psi\right) \quad (\text{II.4})$$

and the total Hamiltonian is obtained by adding the primary constraints with arbitrary multipliers,

$$H_T = H + \int d^3x\,\left(u_1\chi_1 + \chi_2 u_2 + u_0\gamma_1\right),$$

where the u's are arbitrary multipliers (note that u₁ and u₂ are matrices). The last component of $\mathcal{H}$ in (II.4) is a three-divergence that results in a surface term in H. Such terms do not contribute to the functional derivatives of H with respect to the fields, since it is common to consider variations that vanish on the boundary of the integration region when defining variational derivatives. If the fields themselves (not only their variations) vanish at spatial infinity, which is usually assumed in the case of flat space-times, then the surface terms simply vanish, thus giving no contribution to the total energy.

The GPB is given by (II.6), where the upper sign applies whenever at least one of the variables F, G is even and the lower one corresponds to F and G odd (see formula (III.6) of [14] and the discussion therein). The consistency conditions for the time evolution of the primary constraints are given in (II.7). The first equation does not depend on the u's and hence gives rise to a secondary constraint

$$\tilde\gamma_2 := \partial_j\pi^j - e\,\bar\psi\gamma^0\psi \approx 0.$$

The second and the third equation of (II.7) yield merely restrictions on the u's of the form (II.9). The bracket of γ̃₂ with H_T yields yet another restriction on the u's, but it is easy to verify that it is satisfied automatically if u₁ = U₁ and u₂ = U₂ (see (II.9)). Hence, neither further restrictions on the u's nor additional constraints are produced. The consistency algorithm is accomplished.

Let us now investigate what class the constraints belong to. Clearly γ₁ commutes with all the others and hence is first class. The brackets of the remaining constraints are nontrivial (II.11). It seems that there are three families of second class constraints, χ₁, χ₂, γ̃₂, and one family of first class ones, γ₁. However, as explained at the end of Sec. II.A of [14], the constraints are well separated only if the number of first class constraints is as large as possible, i.e. no first class constraints are hidden in the second class ones. Are these constraints well separated? To verify this, let us try to construct a first class constraint γ from a linear combination of the χ₁'s, χ₂'s and γ̃₂'s, with matrix coefficients λ₁, λ₂ and a scalar function κ. In order for this constraint to commute with all the others, it is necessary that certain relations between λ₁, λ₂ and κ hold; these are also sufficient for γ to be first class. Hence, a first class constraint can be constructed. Since κ remained completely arbitrary, a whole family of first class constraints was constructed. Since the system of constraints χ₁, χ₂, γ₁, γ̃₂ is equivalent to χ₁, χ₂, γ₁, γ₂, we are allowed to use the latter. To summarize, the constraints of the theory are χ₁, χ₂, γ₁, γ₂, where the γ's are first class and the χ's are second class. These constraints are well separated, since it is impossible to construct a first class constraint from a linear combination of χ₁'s and χ₂'s alone. Using (II.9), the first class Hamiltonian H′ can be calculated (II.17). The total and extended Hamiltonians are then

$$H_T = H' + \int d^3x\, u_0\gamma_1, \qquad H_E = H' + \int d^3x\,\left(u_0\gamma_1 + w\gamma_2\right),$$

where u₀ and w are arbitrary functions.

A. Equations of motion

The equations of motion (II.19) comprise the equations (a)-(h), which follow from the evaluation of the GPB of the variables with H_E, and the equations (i)-(l), which are the constraints. A high degree of redundancy can be seen in this system.
The equations (i), (k), (l) and (b) can be used to eliminate π⁰, π, π̄ and π^j from the remaining ones. In the case when w = 0, (II.20) and (II.21) are the Maxwell equations and (II.22) reduces to the U(1)-covariant Dirac equation of spinor electrodynamics. In the w ≠ 0 case the equations, when written in terms of $A_a$, do not assume the usual form of the Maxwell equations. This example reflects a general rule: it is the dynamics generated by H_T, and not H_E, which is equivalent to the Euler-Lagrange equations of the initial Lagrangian (see the discussion below formula (II.20) of [14]). However, if A₀ is transformed into Ã₀ := A₀ − w, then, in terms of the quantities F̃_ab and D̃ψ that contain Ã₀ in place of A₀, the equations assume the standard form. Whether these equations are equivalent to the Maxwell equations or not depends on how the physically measurable electric and magnetic fields are defined in terms of $A_a$. The correct definition in the extended formalism builds the electric field from π_j (see below) and the magnetic field from the curl of the vector potential, with ε_ijk the antisymmetric symbol, ε₁₂₃ = 1. Note that the position of the spatial indices is important, since in the metric convention (+, −, −, −), which is used here, shifting a spatial index leads to a change of sign.

B. Gauge transformations

The time evolution of any dynamical variable F is given by (II.25) and depends on the arbitrary functions u₀ and w. The transformations corresponding to changes in these functions ought to be interpreted as gauge transformations, as explained in Sec. II.B of [14]. From (II.25) it is clear what the action of a general such transformation on Ḟ is (II.26). The transformations corresponding to a change in u₀ are said to be generated by the constraint γ₁, whereas those following from a change in w are generated by γ₂. Imagine that F(t) and F̃(t) correspond to the dynamics obtained for the choices of arbitrary functions u₀, w and ũ₀ = u₀ + δu₀, w̃ = w + δw, respectively. Assume that F(t₀) = F̃(t₀) at some instant t₀. The difference δF(t) = F̃(t) − F(t) corresponds to the unphysical gauge transformation. But how can this difference be calculated explicitly? If t = t₀ + τ and F is analytic, then

$$\delta F(t) = \sum_{n=0}^{\infty}\frac{\tau^n}{n!}\,\delta F^{(n)}(t_0),$$

where F⁽ⁿ⁾ denotes the n-th time derivative of F. Up to first order in τ, this change can easily be evaluated by means of (II.26). But what if we wished to find a finite form of a gauge transformation? (II.28) To obtain the variations of higher derivatives, differentiate each side of (II.19). Then the second time derivatives will be expressed by the fields and their first derivatives, whose variations are already known. Variation of the resulting system leads to

$$\delta\ddot A_0 = \delta\dot u_0, \quad \delta\ddot A_j = \partial_j(\delta u_0 - \delta\dot w), \quad \delta\ddot\psi = -ie(\delta u_0 - \delta\dot w)\psi - e^2(\delta w)^2\psi + 2ie\,\delta w\,\dot\psi, \quad \delta\ddot\pi^j = 0 \quad (\text{II.29})$$

(variations of the derivatives of the remaining fields can be found by conjugation and application of the constraints). Note that δu₀ and δw are not infinitesimal and therefore some care is required in calculating the variations. One should simply calculate the difference, e.g. δψ = ψ̃ − ψ, so that the terms proportional to the higher powers of δu₀ and δw are not omitted. The variations of higher derivatives can be found by continuation of this iteration procedure. The Taylor series that emerges can be summed in closed form, (II.30). The transformation of π^j can be found from (II.19b); it follows that δπ^j(t, x) = 0. Hence, it is π_j = F_j0 + ∂_j w, and not F_j0, which is gauge invariant and hence can be interpreted as the measurable physical electric field E_j.
If this interpretation is assumed, then the equations of motion that follow from the extended formalism, when expressed in terms of the measurable quantities E, B, j^a = eψ̄γ^aψ, are precisely the Maxwell equations (note that from (II.30) it follows that j^a is indeed gauge invariant).

III. COULOMB GAUGE

Now that we have all the gauge freedom explicitly described, we should proceed to quantize the theory. Since the only second class constraints are χ₁ and χ₂, the generalized Dirac bracket is given simply by formula (III.23) of [14], although the generalized Poisson bracket is now given by (II.6). If GDBs are promoted to commutators of operators in the quantum theory, the second class constraints can be consistently interpreted as strong operator equalities. Were there no first class constraints, we could try to quantize the theory in much the same way as we did for the free Dirac field in [14]. But the first class constraints are an obstacle. They cannot be interpreted as strong operator equations, since their GDBs with other dynamical variables do not vanish in general. One method of implementing these constraints in the quantum theory is to demand that the Hilbert space of physical quantum states be a subspace of a larger kinematical Hilbert space, selected by the conditions γ_a|ψ⟩ = 0, where a enumerates all the first class constraints. This method was originally proposed by Dirac [2] and gave rise to BRST quantization. However, I shall use a different method, which allows the computation of physically measurable quantities most quickly. This is the fixed gauge quantization. In this approach, one simply adds further constraints to the theory, called gauge conditions, with the result that all the constraints are second class at the end. A gauge condition is any relation between the q's and the p's of the form (III.1) which is accessible, i.e. such that any point (q, p) can be transformed by a gauge transformation to one that satisfies (III.1). Also, we wish that the gauge condition, after being inserted into the consistency algorithm together with the other constraints, eliminates all the gauge freedom. Otherwise, we need to impose further gauge conditions or use the Dirac quantization anyway. I shall consider the simplest condition, which is the Coulomb gauge

$$\chi := \partial_j A_j \approx 0. \quad (\text{III.2})$$

This is clearly accessible by the transformation (II.30), since for any vector potential one can construct a gauge-transformed potential satisfying (III.2); one only needs to take λ to be a solution of △λ = −∂_jA_j. To see this, use the identity △_x(1/|x − y|) = −4πδ(x − y).

Let us now step back to the point of the analysis at which the primary constraints were just found, eq. (II.3). We shall now add the gauge condition (III.2) to the set of primary constraints (the reader is encouraged to check that the results would not change if χ were interpreted as a secondary constraint). Hence, the primary constraints and the total Hamiltonian are now modified accordingly, where H is still given by (II.4). Now that the set of primary constraints and H_T have changed, it is necessary to rerun the consistency algorithm for the time evolution of the constraints. As before, the bracket of π⁰ with H_T gives the constraint γ̃₂, and requiring the brackets of χ₁ and χ₂ with H_T to vanish leads to the restrictions (II.9) on u₁ and u₂. However, calculation of the bracket of χ with H_T results in a new constraint φ. So, in the first run of imposing consistency conditions the two secondary constraints γ̃₂ and φ were found, together with the restrictions (II.9) on u₁ and u₂.
Continuing with the algorithm for γ̃₂ and φ, one gets two further conditions. If (II.9) is used (which is allowed), the first equation gives a restriction on u₀; the second equation then gives another. Hence, no further constraints are produced and the consistency algorithm is now finished. Instead of γ̃₂, the constraint γ₂ = ∂_jπ^j + ie(πψ − ψ̄π̄) can be used, as before. The commutation relations of the constraints are now the following: π⁰ commutes with everything but φ (III.11); χ₁ and χ₂ commute with all the other constraints, and the bracket of χ₁ with χ₂ is the same as before. Finally, the bracket of χ with φ is nontrivial. Hence, all the constraints are now second class, as desired. It is straightforward to check that these constraints do not hide any first class ones, i.e. that it is impossible to construct a first class constraint from a linear combination of them.

A. The generalized Dirac bracket in the Coulomb gauge

Before calculating the GDB, it is useful to rearrange slightly and rename the constraints. In the following I shall use χ₃ := γ₂, χ₄ := χ, χ₅ := π⁰ and, instead of φ, χ₆ := φ + γ₂. This is because the constraints can now be grouped into pairs χ₁, χ₂; χ₃, χ₄; χ₅, χ₆; such that the constraints in a given pair have vanishing brackets with those in the remaining pairs. The matrix C_αβ will then acquire a block-diagonal form, which facilitates the calculation of its inverse. The non-vanishing brackets are given explicitly in (III.14), and the inverses of these blocks in (III.15). Here the a_ij are completely arbitrary numbers. It may be surprising that the inverse matrices are not determined in a unique way and that they do not even inherit the symmetries of the matrices to which they are inverse. This is because these are infinite-dimensional matrices, to which the standard theorems of linear algebra do not apply. That the results (III.15) are correct for arbitrary values of the constants a_ij can be verified by direct computation. That the term proportional to a₃₄ vanishes follows from the following reasoning: imagine that the area of integration is a bounded open subset Ω ⊂ ℝ³ such that z ∈ Ω. Then the application of the Gauss theorem (which is assumed to hold for distributions) allows one to rewrite the integral of △δ(z − y) as a flux of the gradient ∇_y δ(z − y) through the boundary ∂Ω. But the Dirac delta and its derivatives vanish everywhere except at the points at which their argument is zero. Hence, the integration of ∇_y δ(z − y) with respect to y over a region that does not contain z (such as ∂Ω) must necessarily give zero. Now for any z ∈ ℝ³ one can find Ω ∋ z and decompose the integral over ℝ³ into the one over Ω and the one over ℝ³ \ Ω. The integral over the complement of Ω vanishes, since there the argument of △δ is always nonzero. This completes the argument.

So, arbitrary constants appeared in the inverse matrix C^αβ. Will the GDB then also include them? The answer is no! Recall that the purpose of the particular construction of the GDB was to enable the consistent replacement of the brackets by commutators or anti-commutators of operators, depending on whether the variables are even or odd. In order for this to be possible, the GDB was required to possess appropriate symmetries. These symmetries will not be satisfied if the inverse matrix (III.15) is used in the construction, unless the a_ij obey relations that leave only two of them independent. So the freedom in the construction of the GDB is now reduced to the two parameters a₃₄ and a₅₆.
The partially solved formula for the GDB that is convenient for performing further calculations is then (III.18). However, another requirement for the GDB was that it be consistent with all the second class constraints, and it appears that there is a problem with (III.18) in this respect. To see this, take F = A₀(z) and G = ∫d³x λ(x)ψ(x), where λ is an arbitrary function that does not depend on the canonical fields. It follows from (III.18) that this bracket is nonzero. But the constraint χ₆ tells us that △A₀ = ie(πψ − ψ̄π̄), which, under the physically reasonable assumption that A₀ is bounded everywhere, can be integrated to yield

$$A_0(x) = -\frac{ie}{4\pi}\int d^3y\,\frac{(\pi\psi - \bar\psi\bar\pi)(y)}{|x - y|} + a_\infty, \quad (\text{III.20})$$

where the constant a_∞ does not contribute anything to the brackets and in fact could be set to zero without any loss of generality in the subsequent analysis. If (III.20) is used, the calculation of the bracket gives a different result. The bracket (III.18) of a dynamical variable with this integrated constraint does not vanish in general, unless a₅₆ = 0. But how can it be that the bracket with χ₆ vanishes while the bracket with the integral of χ₆ does not? Does this fact contradict the linearity of the bracket? This apparent paradox can be traced back to the following (certainly incorrect) calculation (III.23). The first equality stems from the fact that for any y ∈ ℝ³ we have ∫d³x △δ(x − y) = 0 (use the Gauss theorem). That the result is wrong can be seen by substituting into the RHS of (III.23): the integral of the first component of the integrand vanishes, since this component is the divergence of a vector field that vanishes everywhere except at the point y = x, whereas the integral of the second component gives −1, since this component is just equal to −δ(z − y)δ(y − x), on account of (III.4). Hence, we finally obtain from (III.23) that 0 = −1, which is clearly incorrect. The mistake was made in the second equality of (III.23), where I tacitly assumed that the order of integration with respect to x and y can be swapped. It appears that it cannot. This is the reason why the linearity of brackets that involve integrations in their structure cannot be naively exploited. In order to be able to use both the differential and the integrated form of the constraints consistently with the bracket, I will set a₃₄ = a₅₆ = 0. Finally, then, the GDB does not contain any arbitrariness and is given by (III.25) or, even more explicitly, by (III.26). From now on, the canonical variables $A_a$, π^a will be referred to as the electromagnetic variables, whereas ψ, ψ̄, π and π̄ are the spinor variables.

1) If both F and G depend on the spinor variables only, then the only contribution to the bracket comes from the first line; the bracket is the same as the one calculated in the free Dirac field theory considered in [14].

2) Consider the electromagnetic variables. The bracket of π⁰ with any other variable is certainly zero, since π⁰ is a constraint. The bracket of A_i with any variable G follows from (III.26), from which the bracket for the canonically conjugate pair of electromagnetic variables can be read off. The bracket of A₀ with a variable G_EM that depends on the electromagnetic variables only has an integrand that is a divergence of a vector field, and hence the expression vanishes if δG_EM/δπ⁰(x) has bounded support, or just tends to zero sufficiently fast as x goes to infinity. Note, however, that if, say, G_EM = ∫λ(x)π⁰(x)d³x, where λ does not tend to zero at infinity, then the bracket of G_EM with A₀(y) will not vanish!
So, the smeared constraints are consistent with the bracket (III.26) if and only if the smearing functions decrease sufficiently rapidly at infinity.

3) The bracket of A₀ with a variable G_SP that depends on the spinor variables only is in general nonzero.

4) The bracket of a variable F_SP that depends on the spinor variables only with π^j(y) is the same as −[F_SP, ∂_jA₀(y)]_GD. This observation led Weinberg [3] to define the combined variable

$$\pi^j_\perp := \pi^j + \partial_j A_0, \quad (\text{III.32})$$

which has trivial brackets with the spinor variables. Note that from χ₃ = 0 and χ₆ = 0 it follows that ∂_jπ^j_⊥ = 0, i.e. π^j_⊥ is transverse. The bracket of π^j_⊥ with an arbitrary dynamical variable F can be computed, from which (III.36) follows.

B. The equations of motion

Since the GDB now in use is consistent with the second class constraints, and since all the constraints are now second class, we can freely use them to simplify the form of the observables of interest. For example, the distinction between the canonical, first class, total and extended Hamiltonians is now spurious: the simplest form of the Hamiltonian can be given as (III.37). The equations of motion for the canonical variables are (III.38). The equations certainly need to be supplemented by the constraints. The constraints χ₁ and χ₂ can be used just to eliminate the variables π and π̄ from the formalism once and for all. Similarly, χ₅ tells us that π⁰ = 0, so it is not a physical degree of freedom. The constraint χ₃ then reduces to the Gauss law ∇·π = eψ̄γ⁰ψ, and χ₆ tells us that A₀ is not an independent variable but a functional of the matter fields. For simplicity, I imposed on A₀ the condition of vanishing at infinity; the general bounded solution to the constraint is obviously (III.20). The reader is encouraged to verify the effects produced by a nonzero constant a_∞ of (III.20) in the final results. Note further that (III.32) allows for the elimination of π^j in favor of π^j_⊥ and ψ, ψ̄. Then π^j_⊥ can be eliminated in favor of Ȧ_j on account of the second equation of (III.38). Finally, the only physically important electromagnetic fields are the A_j's, which are restricted by the Coulomb gauge condition χ₄ = ∂_jA_j = 0, so we end up with the two degrees of freedom of the electromagnetic field, as desired.

C. Transition to the interaction picture

In order to construct the quantum theory in the case of the free Dirac field [14], we had to find a general solution to the evolution equations for the fields. The arbitrary operator coefficients in the general solution appeared to obey exceptionally simple commutation rules with themselves and with the Hamiltonian and other physically important observables such as the momentum or the electric charge operator (although these commutation relations were not explicitly verified in [14]). This simplicity was crucial and allowed the construction of a Fock space carrying a representation of all the commutation relations following from the GDB. Even a first look at (III.38) shows that no such simple solution to the system of equations for QED in the Coulomb gauge is possible. One could try to differentiate the second equation and then use the third in order to eliminate π^j from the system. Also, the first equation can be used to eliminate A₀. This can be done, but the resulting relation between A_j and ψ, ψ̄ is far too complicated to be exactly solvable. The way out of these difficulties is to pass to the interaction picture and calculate physical quantities perturbatively. To accomplish this, let us decompose the Hamiltonian (III.37) into the free part H₀ and the interaction part H_I.
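As a sketch of this split (my reconstruction, assuming the standard Coulomb-gauge decomposition; it is consistent with the transverse-photon term h_A and the instantaneous Coulomb term h_C used in Section IV, but the precise normalization is not a quotation from the original):

$$H_0 = \int d^3x\,\Big[\bar\psi\big(-i\gamma^j\partial_j + m\big)\psi + \frac{1}{2}\pi^j_\perp\pi^j_\perp + \frac{1}{4}F_{ij}F_{ij}\Big],$$

$$H_I = \int d^3x\; e\,A_j\,\bar\psi\gamma^j\psi \;+\; \frac{e^2}{8\pi}\int d^3x\,d^3y\;\frac{\big(\bar\psi\gamma^0\psi\big)(x)\,\big(\bar\psi\gamma^0\psi\big)(y)}{|x - y|},$$

where the first term of H_I gives rise to the local density h_A and the second, obtained by inserting the solution (III.20) for A₀ back into the Hamiltonian, to the nonlocal Coulomb density h_C.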
Let F(t) be a dynamical variable whose time evolution is determined in the usual way, by H. By the interaction picture of F we will understand a variable F⁽ᴵ⁾(t) whose evolution is determined by H₀ and whose value at t = 0 is equal to the value of F at this instant: F⁽ᴵ⁾(0) = F(0), with the value of F⁽ᴵ⁾ at any t fixed by the H₀-evolution (use the Taylor expansion). Other immediate consequences of the definition are that the time derivative of an interaction picture variable also evolves in the interaction picture (the same is obviously true for spatial partial derivatives). Finally, for any variables F and G the implication (III.43) holds for any t. This latter corollary will be used repeatedly below (the relations between the derivatives at t = 0 will be obtained, and the relations between the interaction picture fields at any time will then be assumed). Using (III.37) and (III.26), the GDB of any dynamical variable F with H₀ can be calculated, (III.44); the simplification follows from the structure of (III.26). From (III.44) it follows that if F_SP depends on the spinor variables only, then its bracket is the same as the one that would be obtained in the theory of the free Dirac field discussed in [14]. Hence, the interaction picture field ψ⁽ᴵ⁾ satisfies the free Dirac equation. For the other fields, the calculations give (III.47), where the implication (III.43) was used. Note that the evolution of A₀ in the interaction picture is nontrivial, contrary to what seems to have been suggested in [3]. However, the point is that A₀ is no longer necessary, since the first and the second equations now give a simple equation for A⁽ᴵ⁾ⁱ, (III.48). The general solution to (III.48) and (III.49) is given by (III.50). The assumption is made that k⁰ = |k| and dΓ_k = d³k/[(2π)³2k⁰], in complete parallel with the conventions of [14], which were adapted to the case of massless particles. Hence, the e_λ(k) for λ = 1, −1 form a basis of the two-dimensional subspace of ℝ³ orthogonal to k, and the b_λ(k) are arbitrary coefficients. The coefficients b†_λ(k) are classically the complex conjugates of b_λ(k), but the conjugation is denoted by †, since it will pass to Hermitian conjugation in the quantum theory, so that the operators A⁽ᴵ⁾ⁱ are self-adjoint. The letter b was used to denote the coefficients in order to distinguish them from the creation and annihilation operators for the Dirac field that were introduced already in [14]. A convenient choice of the basis e_λ can be made, and it is also convenient to define matrices related to it by the standard rotation (III.53). When these conventions are adopted, the relation (III.50) can be explicitly inverted with respect to the coefficients, (III.54), where the upper sign applies whenever at least one of the variables is even and the lower corresponds to the case of both variables being odd. The evolution of the Heisenberg and the interaction picture operators can now be written down simply; it follows that, knowing a commutation relation at t = 0, it is easy to obtain the one that is relevant for t ≠ 0 using (III.58). Hence, the relation (III.36) yields the equal-time commutator of the interaction picture fields, and since π^j_⊥ = Ȧ⁽ᴵ⁾ʲ(x) on account of (III.47), the relation (III.58), together with (III.54), can be used to derive the commutation relations of the operators corresponding to the coefficients b and b† defined by (III.50) and (III.51) (note that I do not put hats on the annihilation and creation operators). The physical interpretation of b and b† can be inspected in exactly the same way as the meaning of a and a† was established in [14].
Namely, the energy-momentum and spin density tensors should be constructed and their commutation relations with b and b† determined. It follows that k ought to be interpreted as the momentum and λ as the helicity of the particle created by b†_λ(k). Note, however, that this interpretation is valid in the asymptotic regions in which the interaction is sufficiently weak that the time evolution can be satisfactorily approximated by the free Hamiltonian H₀. This situation is perfectly relevant for the description of scattering experiments. Similarly, the operators a_σ(p), a^c_σ(p) and their conjugates, whose commutation relations were obtained in [14], should be thought of as describing Dirac particles and their anti-particles in the asymptotic in and out regions. Of course, the commutators of the a's with the b's all vanish. This follows from the fact that both A_i and π^i_⊥ = Ȧ_i have vanishing Dirac brackets with the spinorial variables (see (III.27) and (III.35)). The Hilbert space carrying a representation of the commutation relations between the interaction picture fields can now simply be constructed as the Fock space of a, a†, a^c, a^c†, b, b†. The asymptotic in and out states will inhabit this space. The non-interacting vacuum (i.e. the lowest energy state of the free Hamiltonian H₀), from which this Fock space is constructed, is annihilated by all of a, a^c and b. To proceed with this description, I shall use the well-known formula for the perturbative expansion of the S matrix elements (the so-called Dyson series), (III.62) (compare with (6.1.1) of [3]). Here 1, 2, ⋯ denote the incoming particles and 1′, 2′, ⋯ the outgoing particles (these labels are assumed to include the information about the momenta, spin projections, helicities and types of particles), T{} is the time ordering, h(x) is the interaction Hamiltonian density in the interaction picture, and : : denotes the normal ordering operation (i.e. all the annihilation operators occur to the right of the creation operators). Using (III.37) and (III.39), one obtains the interaction density. The interaction density in the interaction picture (III.63) can be rewritten as the sum of a local term h_A and a nonlocal Coulomb term h_C, and the Dyson series (III.62) for Compton scattering can then be evaluated order by order. The N = 0 term gives a product of delta functions. Assume now that we wish to calculate the probability amplitude of a scattering event in which the final momenta k′, p′ are at least slightly different from the initial momenta k and p. This assumption is certainly allowable, since it is up to us which probability we wish to calculate! Under this assumption, the delta functions vanish and one gets no contribution from the N = 0 term. In fact, since the wave packets that describe the incoming beams of particles are usually not ideally localized in momentum space, the forward scattering term (IV.6) may contribute slightly to the measured values of the nontrivial scattering, but I will not discuss those kinds of technical complications here. For N = 1, the two terms corresponding to h_A and h_C need to be evaluated. Note that h_C needs to be time-ordered due to its nonlocal character (the time ordering of h_A is not necessary). It is easy to verify that the first component vanishes. Hence, the Coulomb component needs to be calculated; however, this term also appears to vanish. From (IV.4) it is clear that the first term is proportional to e² and the remaining terms are of higher order in e.
If we wish to find the correction to the S matrix that is of lowest nontrivial order in the small coupling constant e, we should neglect all the terms but the first. A straightforward calculation then gives an expression whose kernel is called the propagator for the Dirac field. Treating the last two terms of (IV.14) in the same way, a similar expression is obtained (note that λ̃ differs from λ just by the interchange of x_i and y_j). Hence, the total contribution to the S matrix that is proportional to e² is obtained. Here u(q) was assumed to be constructed according to the conventions adopted in [13], which are consistently used in this article. In order to calculate λ_ln and λ̃_ln explicitly, it is necessary to use (IV.23). To arrive at the result, one should first perform the integrals w.r.t. d⁴x and d⁴y, which will produce delta functions from the exponents of (IV.23). These delta functions can be used in the subsequent integration over d⁴q. One ends up with a single delta function that simply expresses four-momentum conservation. Note that the expressions in the denominators of (IV.24), namely m² − (p + k)² = −2p·k and m² − (p − k′)² = 2p·k′, never vanish, because the plane that is perpendicular to a null vector does not contain any time-like vectors. Therefore, the infinitesimal parameter ε can simply be set to zero. Another useful observation is that N(p + k)γ⁰ = m + p̸ + k̸ and N(p − k′)γ⁰ = m + p̸ − k̸′. Using all of these, the e² contribution to the S matrix element for Compton scattering can finally be expressed in a form proportional to (2π)⁴δ⁴(p + k − p′ − k′). This formula agrees with the one that is obtained by standard methods from Feynman rules; it has exactly the same form as the first formula of Chapter 5.5 of [8].

B. e⁺, e⁻ → μ⁺, μ⁻ scattering

Another exemplary calculation concerns the probability amplitude for the production of a pair of muon and anti-muon in the scattering of an electron and a positron. The modification of the Lagrangian that allows for the inclusion of many kinds of fermions is straightforward: any term of the form ψ̄Lψ, where L is a matrix-differential operator, ought to be replaced by a sum of terms ψ̄⁽ʳ⁾Lψ⁽ʳ⁾, with (r) labeling the different kinds of particles. The two terms contributing to the interaction Hamiltonian are now (IV.26). Let a†, a^c† denote the creation operators of the electron and positron and b, b^c the annihilation operators of the muon and anti-muon. These operators have to be supplemented by additional indices for the momenta and spin projections of the corresponding particles, but I shall skip these labels in the beginning. The N = 1 term in the Dyson series is (IV.27); its two terms vanish, which follows from the fact that in the first one can commute b^c to the right without producing any non-vanishing anti-commutators, whereas in the second it is possible to commute a^c† to the left. The Coulomb part of (IV.27) is, however, nontrivial, and the remaining terms in (IV.30) are of higher order in e. Calculations similar to those performed in the case of Compton scattering can now be carried out, which allow one to recast these expressions; using also the fact that the step function obeys θ(0) = 1/2, one can finally rewrite the Coulomb contribution. Adding this result to (IV.36) and omitting the overall factor (2π)⁴δ⁴(k + k′ − p − p′), we ultimately get the final amplitude. This final result is clearly covariant and agrees with other references, e.g. [8].
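For comparison (an illustrative aside of mine, with generic momentum and spin labels that need not match the conventions above), the textbook tree-level covariant amplitude for e⁺e⁻ → μ⁺μ⁻ to which such a result reduces is, up to overall sign and phase conventions,

$$\mathcal{M} = \frac{e^2}{q^2}\,\big[\bar v(p_+)\gamma^\mu u(p_-)\big]\,\big[\bar u(k_-)\gamma_\mu v(k_+)\big], \qquad q = p_- + p_+,$$

where p∓ denote the electron/positron momenta and k∓ the muon/anti-muon momenta; the single factor 1/q² combines the transverse-photon and instantaneous-Coulomb contributions into one covariant photon propagator.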
V. CONCLUSIONS

Given a Lagrangian formulation of a field theory, there exists an algorithmic procedure for finding all the constraints, the Hamiltonian, and the commutation relations of all the fields with respect to the GDB. If there are first class constraints present, one can try a gauge invariant method of quantization such as BRST quantization (which was not discussed here) or, alternatively, one can eliminate the gauge freedom by imposing gauge conditions. A requirement of consistency of the gauge conditions with the time evolution needs to be imposed, and it may lead to additional constraints. If all the gauge freedom is eliminated, the remaining constraints are all second class and should be incorporated in the construction of the GDB. Then, in order to pass to the quantum theory, one should in principle seek a representation of the final commutation relations of the important fields with respect to the GDB in a Hilbert space. All these steps were performed for the case of electrodynamics with fermions. The causal structure of space-time was not employed and, indeed, all these steps can still be performed if the electromagnetic interaction is replaced or supplemented by gravity. However, even in the case of electrodynamics, the resulting commutation relations appeared to be rather complicated and no obvious way of representing them in a Hilbert space was visible. In the simplest case of the free Dirac field, considered in the first article [14], it was possible to find an explicit solution to the field equations. If the fields are constructed in such a way that they satisfy the field equations automatically, then one does not need to bother about representing their commutation relations with the Hamiltonian. What is more, the remaining commutation conditions between the functions that parametrize the exact solutions were sufficiently simple that a representation could be found for them. On the contrary, in the presence of electromagnetic interaction the equations are too complicated to be solved explicitly. There is a way out of this problem in the case of electrodynamics, since the Hamiltonian decouples into a free part and an interaction part, the latter being controlled by the very small fine structure constant. This allows for the transition to the interaction picture, in which the equations and commutation relations are sufficiently simple. Then perturbative quantization can be applied. In the case of gravity, no such obvious decoupling, which would lead to simple interaction-picture equations, seems to be possible. On the other hand, the non-perturbative field equations are even more complicated than those of electrodynamics, and hence attempts to represent them, without any simplifications, in a Hilbert space seem hopeless. Certainly, one can decompose the metric tensor into the Minkowski part and a deviation, which is commonly done, and quantize only the deviation. Then the interaction picture can be obtained, but the background independence is sacrificed and the resulting theory is non-renormalizable. Hence, these problems are serious, and the lack of Poincaré invariance in the presence of gravity is certainly not an important obstacle.
Hospital-system functionality quantification based on supply–demand relationship under earthquake

The hospital system is one of the most critical systems in a city and plays an irreplaceable role throughout the whole course of an earthquake disaster. This paper presents a method that considers the medical supply–demand relationship to quantify the functionality and functional loss of a hospital system under earthquake conditions, which differs from current quantitative methods that consider only factors internal to the hospital system. The method first provides a "finest granularity" approach, based on GIS overlay analysis, for dividing the quantitative evaluation units of hospital-system functionality. Secondly, the functionality of the hospital system considering the medical supply–demand relationship is formulated, and its quantitative metric, the substitution capacity of medical resources (SCMR), is constructed. Then, we propose a quantification method for the SCMR by combining spatial and network analysis methods. Finally, a hospital system in eastern China is considered as an illustrative example, and the impact of changes in medical supply and demand at different times of the day on hospital-system functionality is analyzed. The results show that both medical supply and demand can impact hospital-system functionality: the loss of medical supply causes a decline of hospital-system functionality, while changes in the spatial aggregation of medical demand positively affect the loss of hospital-system functionality. The proposed method can quantify hospital-system functionality and reflect the balance of the medical supply–demand relationship before and after an earthquake. It can help decision-makers develop scientific post-earthquake emergency plans and enhance hospital-system resilience.

Introduction

Throughout human social development, many cities have been repeatedly damaged by natural disasters, which have caused severe casualties and economic losses (Gencer 2013). The hospital system is one of the most critical systems in a city, and its ability to provide services after an earthquake is an essential factor affecting the recovery of urban areas. Hospital-system organization and external infrastructure systems generally sustain varying degrees of damage, which may compromise the supply capacity of the hospital system (Fang et al. 2017). Furthermore, the increase in medical demand after an earthquake will exacerbate the imbalance between medical supply and demand, resulting in more serious casualties and losses. Therefore, it is meaningful to study the relationship between changes in medical supply and demand after an earthquake and to analyze the impact of changes in medical demand on hospital-system functionality.

Medical supply capacity can be understood as the performance of the hospital system in daily and disaster situations, and measures of hospital-system functionality fall into two classes: qualitative and quantitative. In qualitative evaluation, some scholars consider hospital-system functionality as the performance of hospital-system services after an earthquake. The World Health Organization (2015) proposed a guideline for evaluating safe hospitals to ensure that hospitals provide sustainable performance after adverse events. This guideline suggests that hospitals can be evaluated in terms of structural safety, non-structural safety, and emergency and disaster management.
Achour and Miyajima (2020) considered post-earthquake hospital performance to be related to building integrity, lifeline damage, medical equipment, post-earthquake medical services, and generic information on medical facilities. In addition, hospital functionality is also affected by external networks (telecommunication and road networks), which generally do not prevent hospitals from working but do affect the quality of healthcare services (Achour 2015). At the same time, more and more scholars have begun to study post-earthquake hospital performance from the perspective of resilience. Post-earthquake functional recovery is widely regarded as a critical index for analyzing the functionality of healthcare. For instance, Zhong et al. (2014a, b) adopted four indicators to evaluate hospital disaster resilience, including emergency medical response capability, disaster management mechanisms, hospital infrastructural safety, and disaster resources. From the perspective of emergency management, indicators covering cooperation and training management, resources and equipment capability, and structural and organizational operating procedures were selected by Cimellaro et al. (2018). Fallah-Aliabadi et al. (2020) designed resilience evaluation indicators for the medical system along three dimensions: constructive, infrastructural, and administrative. In addition, Qing-xue et al. (2019) proposed a framework for the quantitative assessment of hospital-system resilience based on total probability theory. In summary, evaluation methods for hospital-system functions can be roughly classified into structural, non-structural, and post-earthquake emergency response and management approaches. Although these methods have gradually matured, their subjective factors and limitations have become more and more prominent.

In quantitative evaluation, Cimellaro et al. (2010a) defined the functionality of medical facilities according to their service quality (such as waiting time). Yavari et al. (2010) and Mitrani-Reiser et al. (2012) defined the functionality of medical facilities according to the availability of their services. Jacques et al. (2014) defined the functionality of medical facilities in terms of the available space in the hospital. Bruneau and Reinhorn (2007) used the ratio of the actual number of patients received per day to the number of patients requiring treatment as a functional quantification index and evaluated the state of the hospital system using a value in the range 0–1. The above approaches focused on the ability of hospitals to cope with medical demand; however, the impact of the real medical demand after an earthquake on the functionality of the hospital system was not considered.

In addition to the internal factors mentioned above, external networks (telecommunication and road networks) have a strong influence on hospital-system functionality (Tariverdi et al. 2019). The WHO guidelines for evaluating safe hospitals mention that a prerequisite for a hospital to operate is access to the hospital. However, even if the hospital itself is not affected by the earthquake, damage to the external network may limit the operation of the hospital system (Kuwata and Takada 2004). In the Great Hanshin Earthquake in Japan, the urban transportation system was severely damaged, which greatly hindered post-earthquake emergency rescue for critical infrastructure services such as hospitals and firefighting (Tsujie 2001).
In the Taiwan Jiji earthquake, many roads and bridges were blocked, causing disorder in the transportation system. In that case, the difficulty of emergency rescue, which could only be carried out by helicopter or on foot, was significantly increased (Ke and Hsu 2022). Due to road interruptions or closures, the ability of hospital systems to receive patients after a disaster plummets. Therefore, the impact of external transportation networks on infrastructure in an earthquake is critical. Many scholars have studied the impact of external transportation networks on the functionality of hospital systems. For example, Dong and Frangopol (2017) analyzed the impact of road damage on hospital capacity, using the bridge network as the main external factor. Tamima and Chouinard (2017) identified the accumulation of road debris due to structural damage as a significant obstacle to evacuation and emergency rescue, affecting the function of emergency facilities during an earthquake. To improve the resilience of hospital systems, establishing a rational emergency management policy is also essential. Achour (2015) argued that hospitals can cooperate with other emergency services according to specific needs. In addition, scientific emergency management plans have been established, such as road repair plans (Li and Teo 2019) and medical evacuation plans (Bish et al. 2014). In this paper, the functionality of the hospital system refers to its ability to provide pre-hospital emergency services during an earthquake, as indicated by the probability of the injured obtaining medical care after the earthquake, which is greatly influenced by the external transportation network. Therefore, this study focuses on the impact of the external transportation network on hospital system functionality and aims to enhance hospital system resilience. Post-earthquake medical demand plays a crucial role in the functional analysis of hospital systems. After a devastating earthquake, the medical supply cannot meet the post-disaster medical demand, resulting in a serious imbalance between medical supply and demand and exacerbating casualties. Meanwhile, dynamic changes in medical demand can also affect the functionality and losses of hospital systems (Cimellaro et al. 2010b). At present, few studies consider both medical supply and demand factors when assessing hospital system functionality. It is urgent to sort out the dynamic changes of supply and demand in the hospital system and analyze their impact on hospital system functionality. Therefore, this paper presents a method to quantify hospital system functionality while considering the medical supply-demand relationship. The remainder of this paper is organized as follows: Sect. 2 introduces the quantification methods for hospital system functionality and functionality loss. In Sect. 3, a case study applying the method to a hospital system located in eastern China is presented. Section 4 analyzes the results and provides some suggestions for decision-makers. Section 5 presents the conclusions of this study.
Overview
This study presents a functionality quantification method for medical systems based on the medical supply-demand relationship that comprises three steps, as shown in Fig. 1. Firstly, we propose the "finest granularity" evaluation-unit method based on GIS overlay analysis to address the limitations of current research methods.
Secondly, a quantitative metric of hospital system functionality, the substitution capacity of medical resources (SCMR), which incorporates medical supply and demand factors, is provided. The SCMR introduces a supply-demand coefficient based on the cumulative opportunity method and considers the influence of hospital diversity. Meanwhile, the quantification of the medical demand metrics adopts Baidu Maps and image processing techniques, which aim to obtain urban population distribution data. Accordingly, an improved earthquake hazard prediction model is used to quantify medical demand after an earthquake (Fig. 1: quantification of hospital system functionality). The quantification of the medical supply metrics is achieved using GIS network analysis, which can calculate the maximum service range of each hospital and combine it with the number of beds in the hospital. In addition, to improve the accuracy of the functionality quantification, we use a Gaussian function and propose a functionality attenuation coefficient to reflect the variation of hospital functionality with distance. Finally, a weighted summation of the SCMR is used to quantify hospital system functionality before and after the earthquake. In this study, the post-earthquake functional loss of the hospital system takes into account the impact of the loss of medical supply capacity.
Division of evaluation units
The study area is divided at the finest granularity, resulting in multiple irregular evaluation units. In urban-scale evaluation research, the division of evaluation units is a crucial step, and the amount of data increases with the refinement of the unit division. Therefore, to simplify the calculation process, many scholars use a community or administrative region as the evaluation unit (Cimellaro et al. 2019; Zhou et al. 2020). However, urban grid management places high requirements on the accuracy of urban information at fine scales, and that unit-division method can no longer meet current needs. Therefore, this paper presents a process for dividing the evaluation units at the finest granularity: the study area is partitioned into a series of indivisible evaluation units according to the distribution of medical demand and medical supply. The schematic of the division is presented in Fig. 2, where u_i denotes a medical-demand zone, v_j denotes a medical-supply zone, and u_i v_j denotes the evaluation unit after segmentation, which corresponds to a unique pair of medical demand and supply.
Substitution capacity of medical resources (SCMR)
The SCMR is the main quantitative indicator for calculating the functionality of a hospital system. It evaluates the amount of medical resources available to the injured within a limited range and ensures that the hospital system can meet post-earthquake medical demands in the event of disruption or loss of hospital functionality; this property is also known as medical resource redundancy. Studies on medical resource redundancy have mainly focused on counting subjects by the cumulative opportunity method. This paper, however, adds a supply-demand coefficient to capture the differences among hospitals, which reflects the relationship between medical supply and demand. A functionality attenuation coefficient is introduced to capture the objective decay of supply capacity with distance.
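Before turning to the SCMR equation, the "finest granularity" division described above lends itself to a short illustration. Below is a minimal sketch using the shapely library (an assumption; the paper itself relies on GIS overlay tooling such as ArcGIS), with purely illustrative rectangular zones: every non-empty intersection of a demand zone u_i with a supply zone v_j becomes one evaluation unit u_i v_j.

```python
# Minimal sketch of the "finest granularity" overlay: intersect every
# medical-demand zone u_i with every medical-supply zone v_j so that each
# resulting evaluation unit u_i v_j maps to exactly one (demand, supply) pair.
from shapely.geometry import box

demand_zones = {"u1": box(0, 0, 2, 2), "u2": box(2, 0, 4, 2)}   # illustrative
supply_zones = {"v1": box(0, 0, 3, 2), "v2": box(1, 1, 4, 2)}   # illustrative

evaluation_units = {}
for ui, u_geom in demand_zones.items():
    for vj, v_geom in supply_zones.items():
        cell = u_geom.intersection(v_geom)
        if not cell.is_empty:
            evaluation_units[(ui, vj)] = cell

for key, geom in evaluation_units.items():
    print(key, round(geom.area, 2))
```

Because each unit is an intersection, it inherits exactly one demand zone and one supply zone, which is precisely the property that the SCMR computation below relies on.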
The equation for calculating the SCMR is as follows: A_it = (Σ_{j=1}^{n} w_ij α_ij N_j) / M_i, where A_it represents the SCMR of evaluation unit i at time t; w_ij and α_ij represent the supply-demand coefficient and the functionality attenuation coefficient of hospital j for evaluation unit i; n represents the number of times the evaluation unit is covered by a hospital; M_i represents the number of injured in evaluation unit i; and N_j indicates the number of beds available to the evaluation unit, obtained as the sum of the available beds in the nearby hospitals.
Functionality of urban hospital system
In existing studies, the functionality of a hospital system is assessed in terms of the percentage of the healthy population, patient waiting time during a disaster, or the number of patients received (Bruneau and Reinhorn 2007; Cimellaro et al. 2010a). Most studies focus on the supply capacity of the hospital system and study its seismic performance under constant demand (Cimellaro and Pique 2016; Khanmohammadi et al. 2018). However, few scholars have combined the temporal and spatial characteristics of medical demand with the supply of the hospital system. Therefore, this study considers both medical demand and supply, and quantifies the functionality of the urban hospital system through the extent of available medical resources in each evaluation unit. The calculation equation is Q(t) = Σ_{i=1}^{m} ρ_i A_it (3), where Q(t) represents the service capacity of the hospital system as a function of time t; A_it is the SCMR of evaluation unit i at time t; ρ_i is the ratio of the population of evaluation unit i to the total population, representing the population share of the evaluation unit; and m is the number of evaluation units within the hospital system.
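Since the original displayed equations did not survive extraction, the sketch below assumes the reconstructed forms written above (SCMR as the attenuation- and supply-demand-weighted bed sum per injured person, and Q(t) as the population-share-weighted sum of the SCMR); all numerical values are illustrative.

```python
# Sketch of the SCMR and system-functionality computation, assuming the
# reconstructed forms A_it = sum_j(w_ij * alpha_ij * N_j) / M_i and
# Q(t) = sum_i(rho_i * A_it); all numbers below are illustrative.
def scmr(w, alpha, beds, injured):
    """w, alpha, beds: one value per covering hospital j; injured: M_i."""
    if injured == 0:
        return 0.0
    return sum(wi * ai * ni for wi, ai, ni in zip(w, alpha, beds)) / injured

units = [  # (w_ij list, alpha_ij list, N_j list, M_i, population share rho_i)
    ([0.6, 0.4], [0.95, 0.77], [800, 300], 120, 0.35),
    ([1.0],      [0.45],       [500],      90,  0.25),
    ([],         [],           [],         60,  0.40),  # uncovered unit -> A_it = 0
]

Q_t = sum(rho * scmr(w, a, n, m) for w, a, n, m, rho in units)
print(f"Q(t) = {Q_t:.3f}")
```

Note how a unit left outside every hospital service range contributes zero to Q(t), which is the mechanism the paper later uses to explain post-earthquake functionality loss.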
Earthquake injury rate
In earthquake damage research, earthquake casualty prediction models are mainly categorized into two types: empirical evaluation methods based on historical data, and casualty-rate evaluation methods based on building vulnerability. The advantage of the first type is that few input variables and no detailed data are required. However, owing to the limited availability of historical disaster data in many places, the established models are usually at a global or national scale and are not suitable for estimating casualties at the urban scale (Yin 1991; Li et al. 2015). The second type considers the state of building structures in the earthquake, and the calculated result is closer to the actual value (Coburn et al. 1992; So and Spence 2013). However, it requires large amounts of primary data and long computation times. Therefore, a semi-empirical method combining empirical and structural analysis has been developed to supplement expert-based earthquake prediction; it is more suitable for situations where primary data are insufficient, and we adopt it here for earthquake damage prediction. We introduce the seismic vulnerability matrix of the structure into the empirical model to form the semi-empirical model used in this study. China experiences a large number of earthquakes, which cause many casualties and extensive property losses every year; there are therefore sufficient empirical data from Chinese cases and studies to derive formulas for building-type damage probability matrices (DPMs). We select the casualty prediction model proposed by Ma and Xie (2000), with population density, building collapse rate, earthquake occurrence time, and earthquake intensity as influencing factors. The equation for earthquake casualty prediction is as follows: log10 RD = 9.0 · RB^0.1 − 10.07 (4), where RD is the mortality rate of the population, ND is an estimate of the number of deaths, RB is the probability of building collapse (RB = collapsed building area / total building area), and M is the total population of the area. f represents the correction factor for population density, and f_t denotes the correction factor for the time of earthquake occurrence in the region, which considers only day and night. To address the shortcomings of the above model, this paper improves the treatment of population density, earthquake occurrence time, and building collapse rate to calculate the injury rate: log10 RD = 9.0 · RB′^0.1 − 10.07 (6), where RB′ denotes the improved earthquake building collapse rate; RD and RI are the mortality rate and injury rate of the earthquake-affected population, respectively; M′ denotes the total population of the evaluation unit considering the population density correction factor and the time factor; and NI is the number of injuries in the evaluation unit. In this model, the number of injured is assumed to be three times the number of deaths (Phalkey et al. 2011). The improvements are as follows. (1) Building collapse rate. In this paper, we take 500 × 500 m grid cells and calculate the seismic building collapse rate for each grid cell under a given earthquake intensity. Gridding the building collapse rate yields more accurate and spatially distributed model results. Studies have shown that masonry structures are more likely to collapse in earthquakes than other structures (Bayraktar et al. 2016). Therefore, this paper adopts conservative results and considers only the collapse of masonry structures. The collapse probability of a masonry structure in an earthquake is determined from the vulnerability matrix (Sun and Zhang 2011), as shown in Table 1.
Table 1 Vulnerability matrix comparison for seismic-design and non-seismic-design structures (%). The left side of the oblique presents the seismic damage matrix of the non-seismically designed masonry structures, and the right side presents the seismic damage matrix of the seismically designed masonry structures. This paper assumes that buildings constructed before 1978 have no seismic design.
(2) Population density and earthquake occurrence time. The population distribution is characterized better by the population density distribution than by a single population density correction factor. Thus, we use a population prediction model based on Baidu heat map data (Peng et al. 2021), which uses the correspondence between heat-map color and population density to calculate the population share and thereby obtain the distribution of urban population density. The Baidu Maps heat map has been widely used in urban population distribution research (Ye et al. 2016; Cao et al. 2019). Based on the data of 200 million Baidu Maps users, it collects population density and population flow information in real time through regional statistics and uses visualization technology to display the population aggregation state with a "heat index" (Tan et al. 2016). Meanwhile, the real-time updating of the heat map data provides more options for the seismic time of the model and increases its timeliness.
Number of people injured in the earthquake
Equation (10) gives the total number of people injured in the earthquake as the sum of the per-unit injuries, NI_t,tol = Σ_{j=1}^{k} NI_jt, where NI_t,tol is the number of injured people at time t in the earthquake, M′_jt is the number of people in evaluation unit j at time t, and k is the total number of evaluation units.
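The casualty chain can be illustrated directly from Eqs. (4)/(6) and the stated three-to-one injured-to-deaths ratio. The intermediate equations (5) and (7)-(9) were lost in extraction, so the sketch below simply strings the stated relations together under that assumption, with illustrative per-unit collapse rates and corrected populations.

```python
# Sketch of the semi-empirical casualty chain, following Eq. (4)/(6)
# (log10 RD = 9.0 * RB^0.1 - 10.07) and the stated assumption that the
# number of injured is three times the number of deaths; the per-unit
# collapse rates RB' and corrected populations M' below are illustrative.
def mortality_rate(rb):
    """Eq. (4)/(6): collapse rate RB (or improved RB') -> mortality rate RD."""
    if rb <= 0:
        return 0.0
    return 10 ** (9.0 * rb ** 0.1 - 10.07)

units = [  # (improved collapse rate RB'_j, corrected population M'_jt)
    (0.08, 4200),
    (0.15, 3100),
    (0.02, 5600),
]

total_injured = 0.0
for rb, pop in units:
    rd = mortality_rate(rb)
    ni = 3.0 * rd * pop          # injured = 3 x deaths, per evaluation unit
    total_injured += ni          # Eq. (10): NI_t,tol = sum_j NI_jt

print(f"NI_t,tol ≈ {total_injured:.0f}")
```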
Medical supply metrics
In this study, medical supply is considered as the number of beds available for injured people in the evaluation unit, expressed in terms of hospital service coverage and the number of beds. The number of beds takes into account the hospital level and how different hospitals are combined. At the same time, considering that infrastructure service functionality attenuates as the service distance increases, a functionality attenuation coefficient is proposed to improve the realism of the model. In this paper, we first predict the scope of hospital services and then quantify the medical supply metrics corresponding to the evaluation units.
Service range and number of beds
Expressing medical supply in terms of the service range is a standard criterion in accessibility analysis research (Owen et al. 2010; Verma and Dash 2020). There are two methods of representing medical supply capacity: a circular area with a certain distance as its radius (the buffer analysis method) and a polygonal area bounded by a certain topological distance (the network analysis method). The first method has low accuracy and cannot properly reflect the limitation imposed by the road network on the service range. In contrast, the second method better reflects the fundamental laws of human travel. Thus, the second method is selected to determine the topological distance of medical services for each hospital, as shown in Table 2 (Qi et al. 2014; Halder et al. 2020). The construction of a network dataset for road capacity is the basis for quantifying medical supply. The network dataset comprises road attributes such as the number of roads, length, travel time, and travel speed. Based on a field investigation in the study area, the average speed of vehicles on the roads was set to 25 km/h and the walking speed was set to 1.5 m/s, while the road capacity was expressed by the calculated travel time for each road section. Finally, we used network analysis to quantify medical supply according to Table 2.
Functionality attenuation coefficient
The Gaussian function captures the law by which hospital service functionality attenuates with distance (Luo et al. 2018). The Gaussian function g(t_j) is defined as the attenuation coefficient that varies with the rescue time t, as shown in Eq. (11). In addition, to reduce the variation of the functionality attenuation coefficient within the same time threshold, this study divides the total service time into bins with a time step of 1 min, as shown in Table 3.
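Equation (11) itself did not survive extraction. The sketch below assumes the Gaussian decay form commonly used in Gaussian-based accessibility studies (e.g., the two-step floating catchment literature that Luo et al. build on); evaluated at the 1-minute bin midpoints with the tertiary-hospital threshold of 7.2 min, it reproduces the tertiary-row coefficients of Table 3 to about two decimals, which supports, but does not prove, the assumption.

```python
# Sketch of a Gaussian travel-time decay, assuming the common form
# g(t) = (exp(-(t/t0)^2 / 2) - exp(-1/2)) / (1 - exp(-1/2)) for t < t0,
# binned at 1-minute steps as in Table 3; t0 per hospital level is assumed
# to be the last time-threshold value of the corresponding Table 3 row.
import math

def gaussian_decay(t, t0):
    if t >= t0:
        return 0.0
    half = math.exp(-0.5)
    return (math.exp(-0.5 * (t / t0) ** 2) - half) / (1.0 - half)

t0_tertiary = 7.2  # service-time threshold (min) from the tertiary row of Table 3
for minute in range(0, 8):
    print(minute, round(gaussian_decay(minute, t0_tertiary), 2))
```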
Functionality loss
Loss of functionality is considered to be the instantaneous change in functionality before and after an earthquake. This change can be understood as a loss of functionality that is ultimately due to building collapse affecting traffic capacity, which reduces medical supply while medical demand remains constant. This paper uses ArcGIS buffer analysis to simulate the impact range of building collapse. According to previous studies on the distribution of post-earthquake debris, the impact range of debris after the collapse of various buildings generally does not exceed half of the building height. This paper therefore takes a conservative value of 2/3 of the building height (Wang et al. 2020). Then, according to the simulation results, the impact of the collapse on road capacity is analyzed. Finally, based on the medical supply quantification method and the road impact analysis, a road capacity reduction coefficient is applied to reduce the traffic speed of roads in the network dataset and thereby decrease the medical supply. The road capacity reduction coefficient is shown in Table 4. The functionality loss of the hospital system is then calculated by Eq. (12), where L represents the functionality loss of the hospital system, and Q(t_0) and Q(t_0′) represent the functionality of the hospital system before and after an earthquake, respectively.
Case study
The research object of this paper is a city with a seismic fortification intensity of 7 degrees in eastern China. The case selects an earthquake with an intensity of 8 as the seismic input to quantify the functionality of the hospital system. The city covers an area of 48.32 square kilometers and has a population of approximately 700,000. As of 2021, the district has ten general hospitals (including four tertiary hospitals, two secondary hospitals, and three primary hospitals) and 18 community health service centers with 9,584 beds. There are 352 roads and more than 20,000 buildings, including 6,053 masonry buildings, as shown in Fig. 3. The above data are all from the local urban construction bureau.
Table 3 Correspondence between hospital rescue time threshold (min) and attenuation coefficient α
Community health center: (1)
Primary hospital: 0-1 (0.95), 1-2 (0.77), 2-3 (0.45), 3-3.6 (0.13)
Secondary hospital: 0-1 (0.96), 1-2 (0.93), 2-3 (0.78), 3-4 (0.61), 4-5 (0.37), 5-6 (0.13)
Tertiary hospital: 0-1 (0.97), 1-2 (0.95), 2-3 (0.85), 3-4 (0.72), 4-5 (0.55), 5-6 (0.35), 6-7.2 (0.12)
Table 4 Reduction coefficient of road capacity in the case of rubble blockage
Figure 4 presents the distribution of population density at six time points within a working day. There are clear differences in the distribution of population density at different times. At 2.00 am, the population is mainly concentrated in the east. From 6.00 to 10.00 am, the population spreads continuously, and this distribution persists until 2.00 pm. At 6.00 pm, the population is most active and widely distributed. At 10.00 pm, the population gradually concentrates eastward again, matching the population distribution characteristics at 2.00 am. The characteristics of the population distribution thus change over time, with significant differences. Figure 5 shows the earthquake population injury rate calculated by the improved semi-empirical model. The earthquake population injury rates in the southwest, central, and northeast regions of this area are relatively high, which is consistent with the density distribution of masonry structures in this area.
Spatial distribution of injured populations
We can obtain the distribution of medical demand after the earthquake, as shown in Fig. 6. Overall, the southwest, central, and northeast regions have a higher density of injured populations. In terms of time, the number of injured people differs significantly across earthquake occurrence times, as shown in Fig. 7. For an MS 8.0 earthquake in the study area, the highest medical demand (7,522 people) occurs at 10.00 pm and the lowest (7,213 people) at 2.00 pm.
Spatial distribution of medical services
The hospital service range is used to represent the medical supply.
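Returning to the functionality-loss step above, a minimal sketch of the degradation is as follows: the road capacity reduction coefficient of Table 4 (whose numerical values are not recoverable here, so the coefficient below is assumed) scales the travel speed in the network dataset, and the loss is then taken as the instantaneous change of functionality in the sense of Eq. (12); the Q values are illustrative.

```python
# Sketch of degrading the network dataset with a capacity-reduction
# coefficient (Table 4) and computing the functionality loss; the loss form
# L = Q(t0) - Q(t0') follows the "instantaneous change" wording of Eq. (12),
# and the coefficient, road length, and Q values below are illustrative.
PRE_EQ_SPEED_KMH = 25.0   # average vehicle speed used in the network dataset

def post_eq_travel_time(length_km, reduction_coeff):
    """Travel time (min) after scaling speed by the reduction coefficient."""
    speed = PRE_EQ_SPEED_KMH * reduction_coeff
    return (length_km / speed) * 60.0 if speed > 0 else float("inf")

print(post_eq_travel_time(1.2, 1.0))   # undamaged road segment
print(post_eq_travel_time(1.2, 0.4))   # rubble-blocked segment (assumed coeff)

Q_before, Q_after = 0.7740, 0.2600     # illustrative Q(t0), Q(t0')
L = Q_before - Q_after
print(f"L = {L:.4f}")
```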
The data in Table 2 are used as the time cost in the ArcGIS network analysis. The medical supply before and after the earthquake is then quantified based on the network dataset, as shown in Fig. 8. The network dataset of some roads after the earthquake is shown in Table 5, with the road capacity reduction coefficient added. Similarly, according to Table 3, the distribution of the functionality attenuation coefficients of the hospital system before and after the earthquake was predicted using the ArcGIS network analysis, as shown in Fig. 9. It can be observed that medical resources in this area before the earthquake were sufficient. However, the service range of the hospital system is greatly reduced after the earthquake, and the higher the level of the hospital, the more serious the reduction. Relating the SCMR to the number of injured (the distributions of SCMR before and after the earthquake are shown in the corresponding figures, including Fig. 11), the SCMR in urban centers is generally high, which suggests that, both before and after the earthquake, the injured in urban centers had more hospital options, or access to hospitals with more beds, than those in other areas. However, in the aftermath of the earthquake, the damage to the system has a more significant impact on the SCMR. The eastern region has a higher redundancy of medical resources, so its SCMR remained high after the earthquake; the SCMR in all other regions fell by more than 50%. According to Eq. (3), the functionality of the hospital system after the earthquake was calculated for each earthquake occurrence time. The calculation results are presented in Fig. 12. Before the earthquake, the functionality of the hospital system differed across earthquake occurrence times, which indicates that changes in medical demand can affect the functionality of the hospital system. At 10.00 am, the hospital system had its maximum functionality, 0.8148; at 2.00 am, it had its minimum functionality, 0.7740. Figure 13 presents the functionality loss at each earthquake occurrence time. In the temporal dimension, changes in demand had a great influence on the functionality loss of the hospital system. The largest loss of functionality occurred at 2.00 am (0.5140), followed by 10.00 pm (0.5061) and 6.00 am (0.4989). In contrast, the losses at 2.00 pm (0.4775), 6.00 pm (0.4919), and 10.00 am (0.4948) were relatively low. Overall, the functionality loss after the earthquake is generally high, with an average loss of 49.72%.
Discussion
In this paper, we propose a method to quantify the functionality of the hospital system by considering the relationship between medical supply and demand, and we analyze the impact of supply and demand changes on hospital system functionality. We found that reducing medical supply may lead to a serious loss of hospital system functionality while medical demand remains constant. Taking 2.00 am as an example, we selected the southwest area, where red represents the service range of medical supply and yellow represents the density of the injured population, as shown in Fig. 14. It can be seen that a reduction in medical supply leads to a decrease in the functionality of the hospital system. This is due to the decline of hospital services: fewer casualties can be treated with the same number of beds, and many patients cannot be rescued in time after the earthquake.
It can be understood as follows: the decline in medical supply reduces to zero the number of beds available to evaluation units that are no longer covered by any hospital service range, so the A_it of those evaluation units becomes 0 and the hospital system functionality declines. Accordingly, emergency medical facilities and beds can be increased, both before and during the disaster, in areas where medical supply is severely insufficient, such as the central, southwest, and northeast parts of the city. In addition, this can help administrators implement medical resource allocation programs, thereby increasing the redundancy of medical resources. As shown in Fig. 12, when medical supply remains constant and medical demand changes, population mobility alters the population density distribution and the number of injuries that can be treated within the hospital service area, which changes the supply-demand coefficients and affects the functionality of the medical system. We conducted a global spatial autocorrelation analysis on the hospital demand distribution to analyze the impact of demand aggregation on hospital system functionality. Figure 15 shows the Moran index of medical demand at each time point. The global Moran index was found to be positively correlated with the functionality loss of the hospital system after the earthquake: the higher the Moran index, the greater the functionality loss. As shown in Fig. 16, the Moran indexes indicated a high aggregation of medical demand at 2.00 am and 10.00 pm, when the functionality loss of the hospital system was greatest, and a low aggregation at 10.00 am and 2.00 pm, when the functionality loss was smallest. Therefore, this paper suggests establishing emergency rescue plans for different earthquake occurrence times to improve the accuracy of rescue. For example, the medical rescue emergency plan should change with the earthquake time, rescue deployment should be carried out according to dynamic demand, and emergency medical resources should be reasonably allocated.
Conclusion
In this paper, we investigated existing methods for quantifying the functionality of hospital systems. These methods aim to evaluate the impact of internal factors on hospital systems without considering the impact of medical demand, and the accuracy of such city-scale evaluations must also improve. Therefore, this paper proposes a method to quantify hospital system functionality by considering the relationship between medical supply and demand. Finally, a case study was conducted with a hospital system in a local area of a city in eastern China as the research object. The results show that an imbalance between medical supply and demand and the aggregation of demand both lead to a loss of hospital system functionality; the stronger the aggregation of medical demand, the more significant the loss. The quantitative method presented in this paper is strongly time-dependent: it can quantify the functional loss of the hospital system according to the earthquake occurrence time, and it can display the service status of the system according to the distribution of the SCMR. This makes it convenient for hospitals to reserve resources and add medical facilities before an earthquake.
They can also assist emergency management departments in formulating pre-earthquake emergency plans and provide a basis for post-earthquake emergency rescue decisions. However, some limitations still need to be addressed. Firstly, this paper considered only the influence of the medical supply-demand relationship on the functionality of the hospital system and did not consider other influencing factors. In subsequent studies we will add internal influencing factors of the hospital system, such as damage to structural components, damage to non-structural components, and the functional coupling of departments within the hospital; we will also add external influencing factors, such as emergency management decisions, rescue strategies, and critical infrastructure systems. Secondly, the functionality quantification method proposed in this study has several limitations of its own: only masonry collapse was considered when calculating building collapse rates and simulating building collapse, and buildings of other structural types were not analyzed; moreover, the building collapse range did not consider the actual collapse mode of the structure, and two-thirds of the building height was used as the collapse influence distance. We will refine the quantification method in further research to improve its accuracy. Finally, this study analyzed the functionality quantification method only through a hypothetical case and did not validate it with actual earthquake disaster data. We will improve and validate the method in subsequent research.
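As a supplement to the spatial autocorrelation analysis reported above, the global Moran's I can be computed as in the sketch below, with an illustrative demand vector and a binary adjacency matrix standing in for the study's actual spatial weights.

```python
# Sketch of the global Moran's I used to measure spatial aggregation of
# medical demand; demand values and binary adjacency below are illustrative.
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x with (row-standardized) weights w."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    w = np.asarray(w, dtype=float)
    w = w / w.sum(axis=1, keepdims=True)          # row-standardize
    n, s0 = len(x), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

demand = [120, 90, 60, 200, 30]                   # injured per unit (illustrative)
adjacency = np.array([[0, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 0, 1],
                      [1, 0, 0, 0, 1],
                      [0, 0, 1, 1, 0]])
print(round(morans_i(demand, adjacency), 3))
```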
2021-09-27T18:41:07.537Z
2021-08-17T00:00:00.000
{ "year": 2022, "sha1": "ddac2a5da368d97cc0067eea90642abdb6a7632a", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-773117/latest.pdf", "oa_status": "GREEN", "pdf_src": "Springer", "pdf_hash": "794b5d3bfee626e2db4f374c9a826b2b7ce03404", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "Medicine" ], "extfieldsofstudy": [ "Business" ] }
118523170
pes2o/s2orc
v3-fos-license
Minimal Interaction in the Local Landau Theory and Construction of the Phenomenological Ginzburg-Landau Potential in Terms of Electron-Phonon Interaction
In the present paper, a method for describing inhomogeneous states with local translational symmetry is proposed, based on the symmetry-dependent interaction between the order parameter (OP) and a compensating field in the phenomenological Landau theory. It is shown that the dimensionality of the compensating field in the extended derivative is associated with the representation of the OP and does not coincide with the dimensionality of the derivative itself. This results in the correct definition of the transformational properties of physical fields in the local Landau theory and in the appearance of the equations of the continuum theory of dislocations in the system of equations of state. A mechanism is proposed for constructing the Ginzburg-Landau potential for states with local translational symmetry, which correspond to HTS states. It is shown that the minimal interaction between the superconducting OP and a tensor compensating field is responsible for the electron-phonon interaction of the BCS model.
Introduction
Gauge models with minimal interaction in the Landau theory were introduced in the pioneering paper by Ginzburg and Landau [1] and later in de Gennes' model [2] describing deformed SmA; they were borrowed from the field theory. In the present paper we will show that, vice versa, minimal interaction is an integral part of the local Landau theory and is symmetry-dependent. Representations with k ≠ 0 are poorly described in the field theory, whereas subgroups of translations play a significant role in physics. Studies of nontrivial representations of the subgroup of translations provide the basis for crystallography and the physics of phase transitions. Unlike the field theory, where compensating fields are determined by an abstract gauge group and do not depend on the representation of the wave function with respect to the subgroup of translations (trivial representations of the translation subgroup with k = 0 are usually considered in the field theory), in the present theory the transformational properties of the compensating fields are entirely determined by the local transformational properties of the OPs. Hence we will show that, for the Ginzburg-Landau model, these local transformational properties are determined by the transformation of the wave function under temporal translations, Eq. (1). An assumption concerning the locality of the transformational properties of the OP (the dependence k_l = k_l(X)) was made in [3]; it is as admissible as the dependence of the OP on the coordinates, η_l = η_l(X). In the general case of the inhomogeneous Landau theory, not only the value of the OP may depend on the coordinates, but its transformational properties may depend on the coordinates as well (they are determined by the vector k). Such an OP describes an inhomogeneous low-symmetry state in which, in each macroscopically small region enumerated by the macrocoordinate X, a vector k is defined, and at this point a local Landau potential Φ(X) can be constructed. Obviously, the dependence of the OP on X in the inhomogeneous models [1,2,4,5] implied that the local Landau potential existed. Otherwise, the dependence of the OP on the coordinate would have been impossible, since the OP is equivalent to the coefficients in the expansion of the density of the state into a Fourier series with respect to the coordinate.
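The displayed forms of the local transformation laws (1) and (2) were lost in extraction. A plausible reconstruction, consistent with the surrounding text (a phase factor with a local frequency ω(X) for temporal translations, and a local wavevector k_l(X) for unit spatial translations by a), is given below; the notation ψ, η_l, and the translation operator are assumed rather than taken from the original display.

```latex
% Hedged reconstruction of the lost displays (1)-(2); these are assumed
% forms consistent with the surrounding text, not the authors' typography.
\begin{align}
  \psi(X, t+\tau) &= e^{\, i\,\omega(X)\,\tau}\, \psi(X, t), \tag{1}\\
  \hat{t}_{\mathbf a}\, \eta_l(X) &= e^{\, i\, \mathbf{k}_l(X)\cdot \mathbf a}\, \eta_l(X). \tag{2}
\end{align}
```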
In fact, the first attempt to examine a model with local transformational properties of the OP in the Landau theory was made by de Gennes in [2]. Since the director in the deformed SmA (the normal vector of a single smectic plane) depends on X, n = n(X), the vector k of the smectic OP also depends on X, k(X) = (2π/d) n(X), where d is the distance between the layers in SmA.
Compensating Field
For models with k ≠ 0, the inhomogeneity k_l = k_l(X) results in nontrivial transformations of the OP derivative under unit translations, Eq. (3) [3,6]. As is known [4], the inhomogeneous Landau potential is an invariant function constructed from the OP and its spatial derivatives with respect to the symmetry of the high-symmetry phase. It follows from (3) that the translation operator transforms the spatial derivatives of the OP, ∂η_l/∂X_j, on the space of the OP itself, η_l(X) (the first summand on the right-hand side of expression (3)). Here, the components of the OP are eigenfunctions of the translation operators (2), because the subgroup of translations is an Abelian group and the irreducible representations (IRs) of an Abelian group are one-dimensional. To construct invariants of the subgroup of translations which include spatial derivatives, we first need to construct a basis that diagonalizes the translation operator and includes the spatial derivatives of the OP. Let us apply the procedure proposed in the gauge field theory and construct extended derivatives by introducing additional compensating fields into the derivative, so that the extended derivatives become eigenfunctions of the translation operator. Here, according to (3), the compensating fields are determined up to the gradient of a vector function. Let us show that this field is a tensor and that a single such field suffices. We write the extended derivative in the form (4), where the action of the translation operator on γA^l_pj is defined in such a way that the extended derivative (4) is an eigenfunction of the translation operator, Eq. (5). Here, A^l_pj is the compensating field and γ is a phenomenological charge. The dimensionality of the compensating field A^l_pj is associated with the dimensionality of the vector k_l, and the field is a second-rank tensor (4), since A^l_pj must transform like ∂k^l_p/∂X_j (5). The vectors of the star {k} are interrelated, since they are obtained from one vector k by operations of the point symmetry group. Hence it follows that one tensor field A_pj can be chosen which is compensating for all vectors k_l of the IR. For example, for the six-beam basis of the icosahedron [6], the extended derivatives take the form (6).
Stress and Dislocations
States described by an OP with local translational properties, k = k(X), admit an illustrative interpretation: an inhomogeneous deformation of the crystal lattice, which is generally accompanied by distortions and the occurrence of dislocations. Dislocations, as linear incompatibilities of the lattice, occur at the boundaries of crystal regions that have different periods. Let us show that the equations of state for the model with k = k(X) contain the equations of the elasticity theory of dislocations. Using (4), we can construct a translation-invariant inhomogeneous Landau potential as a function of the OP and its extended derivatives. In this model, the introduced tensor compensating field A_pj is an independent variable, and the variation of the potential with respect to it must be equal to zero.
As in electrodynamics, we must take into account the invariants of the tensor compensating field, which take the form of antisymmetric derivatives with respect to the second index, according to (5), Eq. (7). The physical interpretation of Σ_pj, the antisymmetric-derivative combination of A_pj defined in (7), is associated with the tensor potential of the stress field introduced by Kröner [7], who identified and described the analogy between magnetostatics and the continuum theory of dislocations. The definition of the stress tensor (7) is the equilibrium condition for the solid state. The equations of state obtained by varying the local Landau potential with respect to the components of the compensating field, δΦ/δA_pj, coincide with the basic equations of the continuum theory of dislocations [8]: the tensor of elastic distortion appears in them, and it follows, according to (8), that its antisymmetric derivative is the density of dislocations, by definition [8]. In the case of the icosahedral star we obtain, from (5), expression (9). Note that nowhere did we specify the explicit dependence of the potential on the invariants composed of the components of the OP, the compensating field, and their derivatives. To obtain the equations of state (8), it was sufficient to construct translational invariants and to require the potential to be a function of those invariants. In the Abelian gauge model of the field theory, the change in the scalar phase of the wave function is compensated; the gauge transformation takes the form (11), and the compensating field is transformed as a vector, (eA_j)^g = eA_j + ∂α/∂X_j. As opposed to the gauge model of the field theory (11), in the model (4) a tensor quantity (5), rather than a vector one, is added to the extended derivative. The construction of the extended derivative in the Landau theory is implemented in such a way that the tensor in the extended derivative transforms together with the OP under transformations from the point symmetry group (6). The compensating field enters the extended derivative not as a contraction of a tensor with respect to the first index, but as a tensor whose first index transforms together with the vector k of the OP, while the second index transforms together with the spatial derivatives. Thus, in the model under consideration, the extended derivative is not a vector by construction (4), (6). Note that we have arrived at this result by considering, in the inhomogeneous Landau theory, nontrivial representations of the subgroup of translations with k ≠ 0, which are commonly not considered in the field theory because of the finite dimensionality of the IRs selected [9]. In the de Gennes model [2], the IRs were in the general case infinite-dimensional, since under deviations of the director from the principal optic axis, n = n(X), the vector star {k} of the smectic OP is the surface of a cone. De Gennes attempted to construct a phenomenological potential for SmA similar to the Ginzburg-Landau potential in order to describe the screening of the stress field by elastic dislocations, analogous to the Meissner effect [1]. However, he selected the compensating field in the extended derivative to be a vector, based on the dimensionality of the derivative itself, and did not check the translational invariance of the constructed smectic potential. It is easily seen that de Gennes' potential is not invariant with respect to the unit translation operator (3), since a vector field cannot compensate for the changes of the director n(X) in three dimensions.
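For reference, the Abelian gauge transformation (11) discussed above can be written out explicitly; the phase convention below is the standard textbook one and is assumed here, since the original display was garbled.

```latex
% Standard Abelian gauge transformation assumed for Eq. (11).
\begin{equation}
  \psi^{g}(X) = e^{\, i\,\alpha(X)}\, \psi(X), \qquad
  (eA_j)^{g} = eA_j + \frac{\partial \alpha}{\partial X_j}. \tag{11}
\end{equation}
```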
The vortex structure of the tensor equations of state (7) results in equations similar to the London equations [1,10] and describes the screening of the stress field by elastic dislocations. As is known [2,11], this problem could not be solved in the de Gennes model, because the Frank potential Φ = Φ(n(X)) for the nematic was used there to describe the elastic properties of SmA, and it contains non-vortex terms of the divergence-of-director type. The present model, with its tensor compensating field in the local Landau theory, is free of the above deficiencies [10].
Ginzburg-Landau Model
Let us consider, by analogy with spatial translations, the model with local transformational properties of the OP under temporal translations (1). In this case, the compensating vector field A_j changes its sign under time inversion, by construction: eA_j(τ) = eA_j + ∂(ωτ)/∂X_j (13) (according to (13), the vector field A_j transforms as ∂ω/∂X_j). In the Abelian field theory, the change of sign of the compensating field in the extended derivative is postulated [9] because, as mentioned above, the scalar phase in (11), which is invariant under temporal transformations, is the group gauge parameter of the electrodynamic model. Knowing that the electromagnetic vector potential A_j changes its sign under time inversion, one can state the converse as well: a representation of the OP for which the electromagnetic potential A_j acts as the compensating field must have local transformational properties under temporal translations (1). Indeed, the extended derivative (12) contains quantities that transform in different ways under time inversion. Thus, the introduction of the local Abelian gauge group, together with the additional specification of the transformational properties of the vector potential under time inversion in the field theory [9], is equivalent to the Landau theory in which the OP has local transformational properties under temporal translations (1). Variation of the local Landau potential with respect to the components of the compensating field is in this case similar to variation of the Ginzburg-Landau potential with respect to the components of the electromagnetic potential and results in the classical Maxwell equations within the system of equations of state [1].
Superconductivity and Electron-Phonon Interaction
As is known [12,13], high-temperature superconductivity (HTS) states are inhomogeneous. However, it may be assumed that short-range crystallographic order exists in the HTS states, and hence a local Landau potential may be constructed. It is therefore necessary to construct the Ginzburg-Landau potential for states with local translational symmetry. The representation of the OP for the HTS state must have local transformational properties under both temporal (1) and spatial (2) translations. In this case, the extended derivative will include linear summands of both the tensor and the vector compensating fields [6]. Since the dynamic equations for the free field A_pj are equivalent to the wave equations for the electromagnetic potential, it is clear that the tensor A_pj corresponds to the phonon potential of the BCS model and is responsible for the electron-phonon interaction (16) in the superconducting state. Here, in using the term "phonon potential", we imply that the tensor A_pj describes the elastic properties of the crystal lattice. Its conjugate quantity, the tensor of dislocation density, produces internal stress in the inhomogeneous state. Here we see a perfect analogy with the current and the electromagnetic field it produces.
In this model, there is a duality in the choice of definitions of the physical fields [14]. Thus, the phenomenological model (16) agrees with the BCS model and describes not only the electromagnetic interaction but the electron-phonon interaction as well. Taking the latter into account results in electron pairing and in a superconducting state in which the current does not depend on the internal stress. Internal stress in the present model does not depend on the current; rather, it is determined by the external conditions. The extension of the derivative associated with introducing the phonon tensor potential A_pj in (16) is dictated by the local translational symmetry of the HTS specimens. In states with local translational symmetry, the inhomogeneous distribution of electron density produces internal stress, which is a source of phonons. In such states electron pairing should be expected to take place at higher temperatures, as is actually observed in HTS states. For ordinary superconductors, the electron-phonon interaction should also be present in phenomenological descriptions of inhomogeneous states. It accounts for the interaction between the current and lattice deformation, but in that case the inhomogeneity of the state results from the inhomogeneous distribution of the superconducting OP in the external magnetic field. Let us emphasize that, in order to double the coefficient in front of the electromagnetic potential in the London equations associated with electron pairing, we did not need to double the charge in the extended derivative and renormalize the wave function, as was done in [15]. An appropriate choice of representation with k ≠ 0 (14) settles this problem within the phenomenological theory. Therefore, it follows from the requirement of translational invariance of the local Landau potential that, in describing superconducting states, the Ginzburg-Landau potential must necessarily take the electron-phonon interaction into account, in agreement with the BCS theory.
Conclusions
In the proposed model of the Landau theory, the symmetry group is global (an ordinary subgroup of translations of space-time, possibly even a discrete one), whereas the representation is local. In the gauge field theory, the abstract gauge symmetry group itself is local. The notion of a local gauge group was introduced to substantiate minimal interaction in electrodynamics. As can be seen from the above considerations, the concept of locality of the transformational properties of the OP with respect to the global subgroup of translations of space-time handles this problem better, since it entirely defines the transformational properties of the physical fields and does not require abstract symmetry groups to be introduced. Non-Abelian Yang-Mills fields [16] are not suitable as compensating fields in the Landau theory, because they take their values in an abstract internal space which, by definition, is not linked to space-time. Yang-Mills fields are not second-rank tensors; hence the equations (7), (8) cannot be obtained in a non-Abelian gauge theory. In the proposed model, the tensor compensating fields do not act in the space of OP functions, as Yang-Mills fields do; rather, they transform together with the OP, or more precisely, with its vector k, and their dimensionality is not associated with the dimensionality of the IR.
2019-04-12T22:11:57.915Z
2010-10-09T00:00:00.000
{ "year": 2010, "sha1": "3906c74d5a4b5fbd2d21893c2dbd90db55a46332", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "3b9beeb7c40bd6d5c9701b991d6a633dcff222f8", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
237100337
pes2o/s2orc
v3-fos-license
Neuropsychiatric Symptoms in Mild Cognitive Impairment and Dementia Due to AD: Relation With Disease Stage and Cognitive Deficits
Background: The interaction between neuropsychiatric symptoms, mild cognitive impairment (MCI), and dementia is complex and remains to be elucidated. An additive or multiplicative effect of neuropsychiatric symptoms such as apathy or depression on cognitive decline has been suggested. Unraveling these interactions may allow the development of better prevention and treatment strategies. In the absence of available treatments for neurodegeneration, a timely and adequate identification of neuropsychiatric symptom changes in cognitive decline is highly relevant and can help identify treatment targets.
Methods: An existing memory clinic-based research database of 476 individuals with MCI and 978 individuals with dementia due to Alzheimer's disease (AD) was reanalyzed. Neuropsychiatric symptoms were assessed in a prospective fashion using a battery of neuropsychiatric assessment scales: Middelheim Frontality Score, Behavioral Pathology in Alzheimer's Disease Rating Scale (Behave-AD), Cohen-Mansfield Agitation Inventory, Cornell Scale for Depression in Dementia (CSDD), and Geriatric Depression Scale (30 items). We subtyped subjects suffering from dementia as mild, moderate, or severe according to their Mini-Mental State Examination (MMSE) score and compared neuropsychiatric scores across these groups. A group of 126 subjects suffering from AD with a significant cerebrovascular component was examined separately as well. We compared the prevalence, nature, and severity of neuropsychiatric symptoms between subgroups of patients with MCI and dementia due to AD in a cross-sectional analysis.
Results: Affective and sleep-related symptoms are common in MCI and remain constant in prevalence and severity across dementia groups. Depressive symptoms as assessed by the CSDD further increase in severe dementia. Most other neuropsychiatric symptoms (such as agitation and activity disturbances) progress in parallel with the severity of cognitive decline. There are no significant differences in neuropsychiatric symptoms when comparing "pure" AD to AD with a significant vascular component.
Conclusion: Neuropsychiatric symptoms such as frontal lobe symptoms, psychosis, agitation, aggression, and activity disturbances increase as dementia progresses. Affective symptoms such as anxiety and depressive symptoms, however, are more frequent in MCI than in mild dementia but otherwise remain stable throughout the cognitive spectrum, except for an increase in CSDD score in severe dementia. There is no difference in neuropsychiatric symptoms when comparing mixed dementia (defined here as AD plus significant cerebrovascular disease) to pure AD.
INTRODUCTION
Neuropsychiatric symptoms, often called behavioral and psychological symptoms of dementia (BPSD), are highly common in individuals suffering from Alzheimer's disease (AD) (1). They negatively impact quality of life (2) and are often experienced as more burdensome than the cognitive manifestations of the disease (3). Furthermore, neuropsychiatric symptoms have been identified as not merely symptomatic: they themselves carry pathogenic and prognostic weight in cognitive disorders (4)(5)(6). Unfortunately, standard therapeutic interventions used in other clinical settings, such as psychotherapy (7), have poor efficacy in the face of cognitive decline, and pharmacological approaches (8) may even be harmful due to side effects, especially in the elderly (9). Nevertheless, accumulating evidence suggests a role for therapeutic interventions in the treatment and prevention of the underlying neurodegeneration itself, apart from their symptomatic effect (10). This underscores the need for the identification of risk factors for BPSD in order to develop new approaches to prevent and treat these symptoms (11, 12). Moreover, recent evidence has suggested that recognition of neuropsychiatric symptoms may increase the detection of cognitive decline in a primary care setting (11). Given the ubiquitous nature of these disorders and the important role of prevention and risk prediction, they deserve specific attention. However, making targeted therapy and prevention more difficult, multiple symptom manifestations are quite common and cluster in groups of symptoms that frequently overlap (13). These include, but are not limited to, delusions, hallucinations, agitation and irritability, aggressiveness, depression, anxiety, apathy, and sleep disturbance. They may occur during the entire disease course, spanning from the prodromal stage to severe dementia (14). As has been examined by several authors, these symptoms are prevalent in AD (15,16) and in dementia in general (17) across multiple settings (e.g., clinical vs. population-based) (18).
The exact causal mechanisms contributing to these symptoms are varied and complex. For instance, they can be caused by the functional, neurochemical, and structural brain changes occurring in neurodegenerative and cerebrovascular disorders leading to dementia, such as AD (19)(20)(21), but they may also be influenced by psychological and psychosocial factors as well as premorbid personality traits (22). For a recent discussion of the available high-quality evidence, see Piras et al. (23). Conversely, BPSD have themselves been associated with subsequent cognitive decline (6,15,24). Especially in cases of mild cognitive impairment (MCI), a heterogeneous construct that includes AD, non-AD neurodegenerative brain diseases, and other conditions like depression, cause and effect are often hard to disentangle (25). Although some BPSD symptom clusters tend to become more severe over time (26), others may decrease (27,28). Other studies have reported more depressive and other behavioral symptoms in cases of dementia with a vascular component (20,29,30). The following hypotheses were formulated a priori in this study: (1) depressive symptoms are more prevalent in MCI, and (2) patients with dementia due to AD with significant cerebrovascular disease have a different neuropsychiatric symptom profile than patients with pure AD. Considering all of the above, it is imperative to further investigate the interactions between neuropsychiatric symptoms and cognitive decline in different stages of AD.
Study Cohort
The study population consisted of a total of 779 patients with dementia due to AD and 399 patients with MCI, selected from an existing database as described below. Patients were included at the moment of their diagnostic workup for cognitive decline in a tertiary care level memory clinic between 1996 and 2013 in a prospective fashion (31)(32)(33). Study methods are described below.
Diagnosis
All subjects underwent a general medical and neurological history and physical examination by board-certified neurologists. Standard blood examination and structural neuroimaging (mostly magnetic resonance imaging, or computed tomography in case of contraindications for the former) were performed. The probable time since symptom onset was estimated by interviewing the patient's main caregiver and/or legal representative. Any use of psychotropic drugs was thoroughly investigated by subject and caregiver interview. We defined as psychotropic any use of benzodiazepines and z-drugs, chloral hydrate, antidepressants and antipsychotic drugs of all classes/generations, stimulants, cholinesterase inhibitors, and antiparkinsonian drugs including amantadine. A subject not taking any of these substances in the preceding months was defined as free of psychotropic medication. The cognitive evaluation was performed by means of a full neuropsychological examination and a Mini-Mental State Examination (MMSE) (34). The general degree of cognitive decline was ascertained using the Global Deterioration Scale (GDetS) (35). MCI was diagnosed using Petersen's criteria (36): (1) cognitive symptoms, corroborated by an informant; (2) objective cognitive impairment, quantified as a performance of more than 1.5 SD below the appropriate mean on the neuropsychological subtests; (3) largely normal general cognitive functioning; (4) essentially intact activities of daily living (basic and instrumental activities of daily living were determined by an interview with patient and informant); and (5) not demented.
Major psychiatric disorders as the cause of cognitive impairment were an exclusion criterion. As all cognitive domains of subjects were tested in an extensive, time-linked (±3 months) neuropsychological examination, all MCI patients were categorized as an "amnestic" subtype with memory deficits or a "non-amnestic" subtype with cognitive decline in areas other than memory; cognitive impairment could be present in a "single domain" or in "multiple domains" (37), as described earlier (31). Probable AD was diagnosed by the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS/ADRDA) criteria (38), and subjects also fulfilled the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for dementia (39). We defined mixed dementia (MXD) in this cohort as a combination of probable AD and probable or possible vascular dementia (VaD), as diagnosed by the National Institute of Neurological Disorders and Stroke and Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS/AIREN) criteria for the diagnosis of VaD (40). All patients and caregivers who consented were followed up clinically, adding to the diagnostic accuracy of our cohort. Multiple subjects underwent, after prior consent, neuropathological examination following autopsy, as described previously (41). We thus obtained several "definite" diagnoses in our cohort. Other specific dementia etiologies [e.g., frontotemporal dementia (FTD), progressive supranuclear palsy, and advanced Parkinson's disease] were diagnosed using the appropriate criteria at the time of diagnosis. These subjects with non-AD dementias were not included in the analyses that this paper reports on. A cohort of 108 age-matched controls was obtained from an earlier study (32). It consists mainly of spouses of cognitively impaired study participants.

Neuropsychiatric symptoms were assessed with the instruments listed below. The Middelheim Frontality Score (MFS) is a scale that assesses frontal lobe function and was validated for the clinical differentiation between AD and FTD. Information is obtained by interviewing the subject's main caregiver (professional or non-professional) and during an interview of the patient, as well as through study of the available clinical files and general behavioral observation. It consists of 10 items to be rated by the clinician or researcher as being either present (1 point) or absent (no point), leading to a total score out of a maximum of 10, as follows: (1) initially comparatively spared memory and spatial abilities; (2) loss of insight and judgment; (3) disinhibition; (4) dietary hyperactivity (referring to overeating); (5) changes in sexual behavior; (6) stereotyped behavior; (7) impaired control of emotions, euphoria, or emotional bluntness; (8) aspontaneity; (9) speech disturbances such as stereotyped phrases, logorrhea, echolalia, mutism, and amimia; and (10) restlessness. A higher score is indicative of more frontal lobe symptoms.

The Cohen-Mansfield Agitation Inventory (CMAI) is a caregiver rating questionnaire that assesses 29 different agitated and aggressive behaviors. These are scored on a 7-point scale related to frequency (1 = never to 7 = several times an hour). Subsection scores are available for three clusters of items: aggressive behavior (10 items), physically nonaggressive behavior (11 items), and verbal aggression or agitation (eight items). A higher score means more agitated and/or aggressive behavior. The Behave-AD is a clinical rating scale for the assessment of pharmacologically remediable neuropsychiatric symptoms in AD.
It consists of 25 individual items rated on a 4-point severity scale from 0 (absent) to 3 (severely troubling to patient or caregiver). Seven groups of symptoms, often called clusters, are assessed: paranoid and delusional ideation (cluster A), hallucinations (cluster B), activity disturbances (cluster C), aggressiveness (cluster D), diurnal rhythm disturbances (cluster E), affective disturbance (cluster F), and anxieties and phobias (cluster G). The "total" score is the sum of all these cluster scores, while the "global" score denotes the impact of behavioral symptoms on caregiver well-being and/or patient safety taken as a whole. A higher score means more troubling neuropsychiatric symptoms. The CSDD was developed to assess signs and symptoms of major depression in patients with dementia, based on an interview with an informant and an interview with the patient. The scale consists of 19 items that are rated as 0 (absent), 1 (present), or 2 (severe). These items focus on five aspects of the depressive syndrome: (A) mood-related signs, (B) behavioral disturbance, (C) physical signs, (D) cyclic functions, and (E) ideational disturbance. A higher score means more depressive symptoms, with a cutoff of 6 generally regarded as reflecting a psychiatrist-ascertained diagnosis of depression (47). The GDS-30 is a self-rating screening instrument for depression in the elderly consisting of 30 yes-no questions on various depressive signs and symptoms. A score of 11 or higher implies mild depression; more than 20, severe depression (46). Despite being created for cognitively healthy older adults, the GDS-30 retains its validity in MCI and mild dementia (48).

Ethics

Data collection started after approval of the study protocol by the local ethics committees of the University of Antwerp and Hospital Network Antwerp (ZNA). All subjects or their legal representatives provided written informed consent for participation in this study.

Statistical Analysis

The dementia population was stratified by dementia severity into three subgroups by total MMSE score: scores of 22 or higher implying mild dementia, scores of 12-21 implying moderate dementia, and scores between 0 and 11 indicating severe dementia, based on a paper by Perneczky et al. (49) correlating MMSE score with general dementia severity. Medication use and gender were compared using chi-square tests, both across diagnoses and across severity groups. Other comparisons were obtained using an analysis of variance (ANOVA) with a least significant difference (LSD) post-hoc test. All data were analyzed using SPSS 26 (IBM, Statistical Package for the Social Sciences, Chicago, IL, USA). The significance level was set at p < 0.05, two-tailed, for all analyses. In the data presented below, we did not correct for repeated measures, although all significant differences remained statistically significant following Bonferroni correction.

Demographics

There are significantly fewer female subjects in the MCI group as compared to the dementia groups. There is a significant (although slight) increase in mean age across groups in parallel with disease severity. There is an expected drop in MMSE score and rise in GDetS across groups. MCI patients use significantly less psychoactive medication than all other groups, as do mild dementia patients as opposed to moderate and severe cases. Results are summarized in Supplementary Table 1. Subjects with MXD are significantly older than those with dementia due to AD but do not differ otherwise in terms of demographics. Supplementary Table 2 illustrates these findings.
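As an illustration of the stratification and group comparisons described under Statistical Analysis, the following minimal Python sketch reproduces the MMSE-based severity grouping and the chi-square and ANOVA tests; the input file and column names are hypothetical, and the study itself performed these analyses in SPSS 26.

import pandas as pd
from scipy import stats

df = pd.read_csv("dementia_cohort.csv")   # hypothetical file and columns

# Severity strata from the total MMSE score (Perneczky et al.):
# 22-30 mild, 12-21 moderate, 0-11 severe.
df["severity"] = pd.cut(df["mmse"], bins=[-1, 11, 21, 30],
                        labels=["severe", "moderate", "mild"])

# Chi-square test of the gender distribution across severity groups.
table = pd.crosstab(df["severity"], df["gender"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

# One-way ANOVA of a neuropsychiatric score across the severity groups
# (the LSD post-hoc comparisons would follow in SPSS).
groups = [g["behave_ad_total"].dropna()
          for _, g in df.groupby("severity", observed=True)]
f_stat, p_anova = stats.f_oneway(*groups)
print(f"F={f_stat:.2f}, p={p_anova:.4f}")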
Across Severity Groups

Findings are summarized in Supplementary Table 3. Figure 1 visualizes the presence of symptoms on the Behave-AD clusters across groups. Figure 2 lists the prevalence of significant depressive symptoms on the GDS-30 and CSDD scales. Since no accepted cutoff values for "relevant" symptoms exist for the CMAI and the MFS, these scales are not included in this prevalence overview. Results across severity categories are visually represented in Figures 3-7, with braces indicating a p-value of <0.001 and dotted braces representing p < 0.05. Separate Behave-AD cluster scores are available in the Supplementary Materials. Several signs and symptoms become progressively more severe when cross-sectionally (and not longitudinally) comparing advanced disease stages to earlier ones, as described in what follows. Frontal lobe symptoms as assessed by the MFS do not differ between mild and moderate dementia. They are more severe in mild dementia compared to MCI, as well as in severe dementia compared to moderate and mild dementia. Paranoid and delusional ideation as assessed by Behave-AD cluster A does not differ significantly between moderate and severe dementia but otherwise increases in parallel with increasing severity of cognitive impairment when comparing groups. Hallucinations as assessed by Behave-AD cluster B are significantly worse in the severe dementia stage in comparison with all other groups. They are also more common in moderate dementia when compared to MCI patients. Activity disturbances as assessed by Behave-AD cluster C do not differ in MCI subjects compared to mild dementia. They are more common, however, in moderate dementia and are worse in severe dementia. Aggressiveness as assessed by Behave-AD cluster D does not differ in MCI subjects compared to mild dementia either. It is more common in moderate dementia and very prevalent in severe dementia. The sum of BPSD symptoms as assessed by the Behave-AD total score does not differ significantly between MCI and mild dementia. It does increase in cases of moderate dementia and again in the severe dementia group. The global burden of BPSD as measured by the Behave-AD global score increases significantly when comparing subjects between disease stages. Agitation and aggressiveness as measured by the CMAI and its subclusters do not differ significantly in MCI subjects as compared to mild dementia. They are significantly more common in moderate dementia and even more so in severe dementia. Depressive symptoms as assessed by the CSDD are significantly worse in severe dementia as compared to the other groups. Relevant depressive symptoms on CSDD rating (a score of 6 or more) are also more frequent in moderate dementia as compared to MCI and mild dementia. Some symptoms show a different distribution across severity stages; note that these are mainly affective symptoms, as described in the following section. Diurnal rhythm disturbances as assessed by Behave-AD cluster E are more frequent in MCI as compared to mild dementia and similar to moderate dementia. They are slightly worse in severe dementia. Affective symptoms as assessed by Behave-AD cluster F do not differ significantly across groups. Anxiety and phobias as assessed by Behave-AD cluster G are more frequent in MCI as compared to mild dementia but do not differ across the other groups. Depressive symptoms as assessed by the GDS-30 trend toward significance when comparing frequency across groups. The presence of relevant symptoms (defined as a score >10) does not differ significantly across groups.
Given the possible effects of gender on the prevalence and severity of neuropsychiatric symptoms (50,51), we reran all analyses on male-only and female-only cohorts. This did not cause any relevant change in our results. We also evaluated the interaction between gender and neuropsychiatric symptom scores according to severity of cognitive decline. These interactions were not significant (data not shown).

Comparing "Pure" Alzheimer's Disease to Mixed Dementia (Vascular Dementia + Alzheimer's Disease)

Findings are summarized in Supplementary Table 4. There are slightly more diurnal rhythm disturbances in MXD as compared to AD using Behave-AD cluster E. MXD patients exhibit more severe depressive symptoms when evaluated using the GDS-30, but not when using the CSDD or Behave-AD cluster F. As the GDS-30 loses validity in severe dementia, the significant difference based on the GDS-30 should be interpreted with caution.

DISCUSSION

Depression and Anxiety as Related to Severity of Cognitive Decline

Our results indicate that affective symptoms and anxiety are common in MCI, more so than in mild dementia. Depressive symptoms further increase in moderate and severe dementia (Supplementary Table 3; Figures 2, 6, 7). Depressive symptoms are worse in patients with severe dementia compared to earlier stages of cognitive decline, as is the percentage of subjects with clinically relevant depressive symptoms [defined as a score of 6 or more on the CSDD (47)]. This cutoff is reached by more subjects in the moderate dementia group compared to MCI and mild dementia (Supplementary Table 3: MCI, 32%; mild dementia, 30%; moderate dementia, 38.1%; and severe dementia, 53.6%). These findings suggest that depressive symptoms increase in parallel with the evolution of dementia, discrediting the hypothesis that affective symptoms are common mainly in prodromal and early phases of cognitive decline and decrease later on (27,52-54). We should note that this progression of depressive symptoms is less pronounced when using the GDS-30 scale (Supplementary Table 3; Figure 7). This is probably because of the language-based and metacognitive items included in this self-rated scale, which are difficult to obtain in patients with severe dementia, who often lose these cognitive abilities, compromising validity (48). The CSDD was created for use in subjects with (advanced) dementia and therefore has less of a focus on cognitive/affective symptoms compared to physical or behavioral signs and symptoms (47). In addition to these differences in rating scales, several types of depressive symptoms often coincide throughout the spectrum of cognitive impairment (25,55,56). For example, motivational symptoms may overlap with apathy (57,58) or with sedative side effects of psychotropic drugs (9,59). Vegetative symptoms (60) of depression such as weight loss or other bodily complaints may mimic common somatic issues in the elderly, and vice versa (61). Mood disturbances may also be reactive to a diagnosis of cognitive decline [although this is certainly not a universal reaction (62)], to being hospitalized and/or moving to a nursing home, etc. All of these may influence the results and interpretation of rating scales assessing the several physical (more prominent in the CSDD) and mental (more prominent in the GDS-30) symptoms associated with a depressive syndrome. There are no differences when using the F cluster of the Behave-AD, which consists of an item evaluating tearfulness and an item concerning suicidal thoughts or actions.
These may be insensitive as sole markers of depressive symptoms when compared to scales with items assessing more aspects of the depressive syndrome, as mentioned above (63,64). We found anxiety to be more frequent in MCI patients as compared to advancing stages of dementia. In combination with a relatively high burden of depressive symptoms compared to other neuropsychiatric symptoms (which are rare in MCI), this may reflect the presence of psychiatric disorders comorbid with MCI (65,66). Indeed, as major psychiatric disorders as the cause of cognitive impairment were an exclusion criterion, and as the clinical-diagnostic evaluation was made by a multidisciplinary team consisting of, among others, experienced cognitive neurologists and neuropsychologists, it is unlikely that subjects suffering from major psychiatric disorders (such as clinical depression or anxiety disorders) would have been included. Nevertheless, distinguishing new neuropsychiatric symptoms of degenerative brain disease from preexisting (mild or subsyndromal) psychiatric issues remains challenging (67). Additionally, MCI patients undergoing a diagnostic process for cognitive decline may experience worries about the future (68,69). As discussed earlier, some authors have suggested that these symptoms may be prodromal to degenerative disease (25,70,71) and may wane over time as the underlying brain disease progresses (27). We could not confirm a strongly decreasing trajectory of any neuropsychiatric symptom in this study.

Other Neuropsychiatric Symptoms as Related to Severity of Cognitive Decline

Our results further demonstrate that there is a gradual progression of most other neuropsychiatric symptoms in parallel with the severity of cognitive decline. This is the case for frontal lobe symptoms, delusions and hallucinations (i.e., psychotic symptoms), activity disturbance, aggressiveness, agitation, and general neuropsychiatric symptoms as measured by the Behave-AD clusters as well as by the Behave-AD total and global scores. Most symptoms are present in a stage of moderate dementia and increase in prevalence and severity in severe dementia. This confirms earlier research suggesting an increasing prevalence and severity of neuropsychiatric symptoms with cognitive decline in AD (41,42,72,73). Of note, some studies have found a decreasing or stable burden in advanced disease (53,54), which we could not confirm for any of the measures we used. Furthermore, not all neuropsychiatric symptoms increase linearly in frequency or severity throughout the spectrum of cognitive decline. Some neuropsychiatric changes such as depressive symptoms, anxiety, and sleep disruption are present in MCI (Supplementary Table 3). Although we did not statistically compare BPSD data of MCI and dementia patients with healthy controls in this study, our historical control cohort demonstrates that these symptoms are rare in cognitively healthy aging (Supplementary Table 3, column e) (32). In this large study, psychotropic medication is used far more by the dementia groups compared to MCI, even when considering the possible presence of primary or reactive psychiatric disorders among subjects with MCI. In our cohort, only 30% of AD dementia patients were free of psychotropic medication (Supplementary Table 2). One explanation could be that polypharmacy is highly common among nursing home residents with and without dementia (74).
Furthermore, subjects undergoing diagnostic evaluation and/or hospitalization related to dementia or delirium are frequently prescribed psychotropic medication (75), despite limited or ambiguous evidence of short- and middle-term efficacy of, for example, antidepressants (76,77). Nevertheless, some studies have revealed a long-term role for these drugs, implying a decreased risk of further cognitive decline under treatment (78). Many studies, however, did not find such an effect (79). As we await stronger evidence, efforts are ongoing to decrease the prescribing of psychotropic agents that are not stringently indicated (80,81). Psychotropic drugs may influence behavioral scores. For example, the use of benzodiazepines and other sedatives may mask symptoms of anxiety in subjects in more advanced disease stages. On the other hand, prescription of psychotropics has been linked to increasing care dependence in subjects with dementia (82). A similar explanation may underlie the presence of sleep symptoms as assessed by Behave-AD section E (consisting of the options: no symptoms, repetitive awakenings, loss of 50-75% of nighttime sleep, and total loss of nighttime sleep/reversal of day-night rhythm), since both depression and anxiety are associated with fragmented sleep, and the use of sedative drugs may mask these symptoms. Separately examining subjects with and without use of psychotropic medication in our cohort did not alter our general results concerning differences in the mean burden of neuropsychiatric symptoms between groups according to severity of cognitive decline (data not shown). This may reflect an effect of pharmacological treatment on these symptoms or the heterogeneity of these subjects, since several kinds of psychotropic drugs were considered together. As mentioned above, the effect of gender on all of the above was minimal in our cohort after separate analysis of male and female subjects as well as an analysis of the interaction between gender, neuropsychiatric symptom scores, and severity of cognitive decline. This did not yield any significant results (not shown here). Summarizing, our results indicate that most behavioral symptoms such as psychosis, aggression, and activity disturbance increase linearly with advancing dementia. We further demonstrate that affective symptoms are frequent in MCI and seem to remain stable throughout the course of cognitive decline in AD. Depressive symptoms are common and increase with the severity of dementia.

Neuropsychiatric Symptoms as Related to Dementia Diagnoses (Alzheimer's Disease vs. Mixed Dementia)

We did not observe significant differences between the AD and MXD groups in our cohort, barring a slightly higher incidence of (relevant) depressive symptoms as assessed by the GDS-30, which was not reproduced using other measures of depressive symptoms. This may be due to our definition of MXD, which is probable AD in combination with significant cerebrovascular disease. Differences with cohorts comprising "pure" VaD may be larger, as has been suggested in research on late-life depressive symptoms and vascular disease (19,20,30). Although not all studies have confirmed this link (83), systematic literature review does suggest such a relation (84). Causal mechanisms, however, remain controversial. They may include structural damage to neural networks (30,85) or a shared inflammatory pathogenesis (86). Much research in this area focuses on "pure" vascular/subcortical dementia, concentrating, for example, on white matter lesions (87-89).
Far fewer studies have explicitly evaluated the neuropsychiatric profiles of MXD vs. AD (90,91). As mentioned before, certain depression scales such as the GDS-30 lose validity in severe dementia (48), which may bias our findings. Nevertheless, the absence of significant differences between AD and MXD subjects remained even when removing cases of severe dementia from the analysis (not shown here). Our study's results therefore suggest that considering "pure" AD together with instances of AD with significant cerebrovascular disease is justifiable when studying neuropsychiatric symptoms.

Limitations

Like all diagnostic instruments, the rating scales used in our study have intrinsic limitations. It has, for example, been suggested that the CMAI is prone to proxy reporting bias due to the highly distressing nature of these symptoms for caregivers (92). We discussed the possible limitations of the GDS-30 in severe dementia earlier. An important limitation of this study is the absence of apathy measures. Despite phenomenological overlap with depression concerning, for example, loss of interest or motivation (93), apathy is a distinct neuropsychiatric syndrome (94). It has been increasingly recognized as an important predictor of dementia in both community-dwelling (95) and clinical settings, with and without concurrent depressive symptoms (96-98). Apathy has furthermore been associated with an important reduction of quality of life in patients as well as caregivers (99,100). Additionally, our study evaluated depressive symptoms in a cross-sectional fashion. Although a thorough medical history including psychiatric disorders (including depression) was obtained from all participants, we did not systematically assess the frequency and severity of past depressive episodes or the duration of current affective symptoms. It has been suggested that new-onset or worsening depressive symptoms in the elderly are related to cognitive decline (4). A lifetime history of depression has also been implicated in dementia risk (25). This lack of temporal information on affective symptoms is a limitation, since such information is known to be of diagnostic and prognostic importance: it has been argued that new and increasing affective symptoms especially increase the risk of dementia (25,101,102). This limitation should temper the conclusions drawn here. Notwithstanding the prospective nature of our neuropsychiatric assessment, it has been reported that memory clinic cohorts have more severe neuropsychiatric symptoms (103,104) than individuals not seeking medical attention. The presence of these symptoms may have caused subjects or their caregivers to seek professional help earlier, which could have caused an overestimation of the prevalence of these symptoms, resulting in a selection bias. Our results should therefore be extrapolated to non-clinical populations with caution. We did not analyze marital status, estimated duration of cognitive symptoms, or level of education. A more thorough subtyping of the impact of specific classes of psychotropic drugs on our findings was not done; we considered the use of several psychotropic drugs together in a binary fashion, which may explain the lack of impact on our findings as mentioned above. Lastly, we did not evaluate the role of imaging, genetic, or biochemical [i.e., cerebrospinal fluid (CSF) biomarker] data in this study.

CONCLUSION

Depressive symptoms are prevalent in MCI. They increase in severity and prevalence in moderate dementia and in severe dementia.
Anxiety is frequent in MCI and remains roughly stable throughout the cognitive spectrum. Frontal lobe symptoms, psychosis, agitation, and activity disturbance worsen linearly as cognition declines. There is no clear difference in neuropsychiatric symptoms when comparing pure AD to mixed vascular-AD dementia. Neuropsychiatric symptoms are highly common in moderate and severe dementia despite frequent pharmacotherapy, demonstrating a clear need for new therapeutic options for these incapacitating symptoms.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Ethical Committees of the University of Antwerp and Hospital Network Antwerp (ZNA). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

WW, CB, and SE conceived the idea for this manuscript. WW performed the database analysis with help from MW and DZ. WW wrote the first drafts. MW, DZ, CB, and SE critically reviewed and commented on these drafts. All authors read and approved the submitted version.
Holographic dark energy linearly interacting with dark matter

We investigate a spatially flat Friedmann-Robertson-Walker (FRW) cosmological model with cold dark matter coupled to a modified holographic Ricci dark energy through a general interaction term linear in the energy densities of dark matter and dark energy, the total energy density and its derivative. Using the statistical method of the $\chi^2$-function for the Hubble data, we obtain $H_0 = 73.6$ km/s Mpc, $\omega_s = -0.842$ for the asymptotic equation of state, and $z_{acc} = 0.89$. The estimated values of $\Omega_{c0}$ which fulfill the current observational bounds correspond to a dark energy density varying in the range $0.25R < \rho_x < 0.27R$.

I. INTRODUCTION

Many different observational sources, such as the Supernovae Ia [2,3], the large scale structure from the Sloan Digital Sky Survey [4] and the cosmic microwave background anisotropies [5], have corroborated that our universe is currently undergoing an accelerated phase. The cause of this behavior has been attributed to a mysterious component called dark energy, and several candidates have been proposed to fulfill this role. For example, a positive cosmological constant Λ explains very well the accelerated behavior, but it has a deep mismatch with the theoretical value predicted by quantum field theory. Another issue of debate refers to the coincidence problem, namely: why do the dark energy and dark matter energy densities happen to be of the same order precisely today? In order to overcome both problems, a dynamical framework in which the dark energy varies with the cosmic time has been proposed. This proposal has led to a great variety of dark energy models, such as quintessence [6], exotic quintessence [7], N-quintom [8] and the holographic dark energy (HDE) models [9] based on an application of the holographic principle to cosmology. According to this principle, the entropy of a system does not scale with its volume but with its surface area, and so in a cosmological context it sets an upper bound on the entropy of the universe [10]. It has been suggested [11] that in quantum field theory a short distance cut-off is related to a long distance cut-off (infra-red cut-off L) due to the limit set by the formation of a black hole. Further, if the quantum zero-point energy density caused by a short distance cut-off is taken as the dark energy density in a region of size L, it should not exceed the mass of a black hole of the same size, so $\rho_\Lambda = 3c^2 M_P^2 L^{-2}$, where c is a numerical factor. In the cosmological context, the size L is usually taken as a large scale of the universe: the Hubble horizon, the particle horizon, the event horizon or a generalized IR cutoff. Among all the interesting holographic dark energy models proposed so far, here we focus our attention on a modified version of the well-known Ricci scalar cutoff [12]. Besides, there could be a hidden non-gravitational coupling between the dark matter and dark energy without violating current observational constraints, and thus it is interesting to develop ways of testing an interaction in the dark sector. Interaction within the dark sector has been studied mainly as a mechanism to solve the coincidence problem. We will consider an exchange of energy or interaction between dark matter and dark energy which is a linear combination of the dark energy density $\rho_x$, the total energy density $\rho$, the dark matter energy density $\rho_c$, and the first derivative of the total energy density $\rho'$ [13].
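For orientation, the bound quoted above follows from requiring that the vacuum energy contained in a region of size L not exceed the mass of a black hole of the same size; schematically (a sketch of the standard argument, not text from this paper),

$L^3 \rho_\Lambda \lesssim L\, M_P^2 \quad \Longrightarrow \quad \rho_\Lambda \leq 3c^2 M_P^2 L^{-2},$

where the saturated form is the one adopted as the holographic dark energy density.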
II. THE INTERACTING MODEL

In a FRW background, the Einstein equation for a model of cold dark matter of energy density $\rho_c$ and modified holographic Ricci dark energy of energy density $\rho_x = (2\dot H + 3\alpha H^2)/\Delta$ reads

$3H^2 = \rho_c + \rho_x,$   (1)

where α, β are constants and $\Delta = \alpha - \beta$. In terms of the variable $\eta = 3\ln(a/a_0)$, the compatibility between the global conservation equation and the equation deduced from the expression of the modified holographic Ricci dark energy, namely $-\rho' = \rho_c + \gamma_x\rho_x = \alpha\rho_c + \beta\rho_x$ (3), gives a relation between the equation of state of the dark energy component $\omega_x = \gamma_x - 1$ and the ratio $r = \rho_c/\rho_x$,

$\gamma_x = \beta + (\alpha - 1)\,r.$   (4)

Solving the system of equations (1) and (3) we get $\rho_c$ and $\rho_x$ in terms of $\rho$ and $\rho'$ as

$\rho_c = -\frac{\beta\rho + \rho'}{\Delta}, \qquad \rho_x = \frac{\alpha\rho + \rho'}{\Delta}.$   (5)

The interaction between both dark components is introduced through the term Q by splitting Eq. (3) into $\rho'_c + \alpha\rho_c = -Q$ and $\rho'_x + \beta\rho_x = Q$. Then, differentiating $\rho_c$ or $\rho_x$ in (5) and using the expression of Q, we obtain a second-order differential equation for the total energy density ρ [13],

$\rho'' + (\alpha + \beta)\rho' + \alpha\beta\rho = \Delta\, Q.$   (6)

For a given interaction Q, solving Eq. (6) gives us the total energy density ρ, and the energy densities $\rho_c$ and $\rho_x$ then follow from Eq. (5). The general linear interaction Q [13], linear in $\rho_c$, $\rho_x$, ρ, and $\rho'$, can be written with a constant $\gamma_s$ and coefficients $c_i$ fulfilling the condition $c_1 + c_2 + c_3 + c_4 = 1$ [13] (Eq. (7)). Now, using Eqs. (5), we rewrite the interaction (7) as a linear combination of ρ and $\rho'$ (Eq. (8)). Replacing the interaction (8) into the source equation (6), we obtain

$\left(\frac{d}{d\eta} + \gamma_s\right)\left(\frac{d}{d\eta} + \gamma_+\right)\rho = 0,$   (9)

where the roots of the characteristic polynomial associated with the second-order linear differential equation (9) are $\gamma_s$ and $\gamma_+ = (\beta\alpha - u)/\gamma_s$. In what follows, we adopt $\gamma_+ = 1$ for mimicking the dust-like behavior of the universe at early times. In that case, the general solution of Eq. (9) is $\rho = b_1(1+z)^3 + b_2(1+z)^{3\gamma_s}$, so that Eqs. (5) give

$\rho_c = \frac{(1-\beta)\,b_1(1+z)^3 + (\gamma_s-\beta)\,b_2(1+z)^{3\gamma_s}}{\Delta}, \qquad \rho_x = \frac{(\alpha-1)\,b_1(1+z)^3 + (\alpha-\gamma_s)\,b_2(1+z)^{3\gamma_s}}{\Delta}.$   (10)

Interestingly, Eqs. (10) tell us that the interaction (8) seems to be a good candidate for alleviating the cosmic coincidence problem, because the ratio $\Omega_c/\Omega_x$ remains bounded at all times. Let us consider the particular case in which the interaction Q is proportional to the energy density of the dark matter $\rho_c$, in such a way that $\rho'_c + \rho_c = \rho'_x + (1+\omega_x)\rho_x = 0$; that is, each fluid separately satisfies a conservation equation. Here the constants u and $\gamma_s$ defined above correspond to $u = \beta(\alpha - 1)$ and $\gamma_s = \beta$, with which the expressions (10) for the energy densities of the dark matter and dark energy are written as functions of the redshift as

$\rho_c = \frac{(1-\beta)\,b_1(1+z)^3}{\Delta}, \qquad \rho_x = \frac{(\alpha-1)\,b_1(1+z)^3}{\Delta} + b_2(1+z)^{3\beta}.$   (11)

The ratio between both components $r = \rho_c/\rho_x$ turns out to be

$r = \frac{(1-\beta)\,b_1(1+z)^{3(1-\beta)}}{(\alpha-1)\,b_1(1+z)^{3(1-\beta)} + \Delta\, b_2},$   (12)

and shows that in the early universe both components behave as dust. In the final stages the ratio tends to zero, and therefore this case does not solve the coincidence problem. This example of interaction between nonrelativistic dark matter and the modified holographic Ricci dark energy is important because it shows that the holographic forms of the dark energy are always interacting with the non-holographic component. This behavior can be observed in Eq. (11) and is due to the functional dependence of the holographic equation of state on the ratio r of the energy densities whenever α differs from 1.

III. OBSERVATIONAL CONSTRAINTS

The transition redshift $z_{acc}$, which satisfies the equation $\dot H + H^2 = 0$, and the actual Hubble factor $H_0$ allow us to express the coefficients $b_i$ in Eqs. (10), so that the Hubble function reads

$H(z) = H_0\left[\frac{(2-3\gamma_s)(1+z)^3 + (1+z_{acc})^{3-3\gamma_s}\,(1+z)^{3\gamma_s}}{(2-3\gamma_s) + (1+z_{acc})^{3-3\gamma_s}}\right]^{1/2}.$   (14)

We apply the $\chi^2$-statistical method to the Hubble data [14] for constraining the cosmological parameters of the Hubble function (14).
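The $\chi^2$-minimization over $(H_0, z_{acc}, \gamma_s)$ is straightforward to reproduce numerically. The following Python sketch illustrates the procedure with placeholder arrays standing in for the actual H(z) compilation of Ref. [14]; it is an illustration of the method, not the original analysis.

import numpy as np
from scipy.optimize import minimize

# Placeholder data: hypothetical H(z) points, NOT the compilation of Ref. [14].
z_obs = np.array([0.1, 0.4, 0.9, 1.3, 1.75])
H_obs = np.array([69.0, 82.0, 105.0, 130.0, 165.0])   # km/s/Mpc
sigma = np.array([12.0, 10.0, 12.0, 15.0, 17.0])

def hubble(z, H0, z_acc, gs):
    """Hubble function of Eq. (14) for the model with gamma_+ = 1."""
    a = 2.0 - 3.0 * gs
    b = (1.0 + z_acc) ** (3.0 - 3.0 * gs)
    return H0 * np.sqrt((a * (1.0 + z) ** 3 + b * (1.0 + z) ** (3.0 * gs)) / (a + b))

def chi2(params):
    H0, z_acc, gs = params
    return np.sum(((hubble(z_obs, H0, z_acc, gs) - H_obs) / sigma) ** 2)

best = minimize(chi2, x0=[70.0, 0.8, 0.2], method="Nelder-Mead")
H0, z_acc, gs = best.x
print(H0, z_acc, gs, best.fun / (len(z_obs) - 3))   # chi^2 per degree of freedom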
The three-dimensional confidence regions 1σ and 2σ are shown in the left panel of Fig. 1, where the sphere indicates the best fit values $z_{acc} = 0.89$, $H_0 = 73.6$ km/s Mpc and $\gamma_s = 0.158$, with a minimum value of the $\chi^2$ function per degree of freedom $\chi^2_{dof} = 0.761$. This is interesting in the sense that we now know the values of $z_{acc}$ and of the current Hubble parameter $H_0$ predicted by our model; nevertheless, this information does not allow us to determine the values of α and β best fitted to the observational data. We must bear in mind that a feasible model of the dark sector has dark components with positive definite energy densities, accelerated expansion and non-phantom dark energy. These requirements are fulfilled when $b_1$ and $b_2$ are positive constants, which corresponds to $\alpha \geq 1$ and $0 \leq \beta < 2/3$. To determine the most acceptable ranges of the parameters α and β, we note that these constants are involved in the expressions of the partial energy densities $\rho_c$ and $\rho_x$; so we apply the $\chi^2$-statistical method as above, but now using the expressions (15), written in terms of the current density parameters, with $\Omega_{c0} + \Omega_{x0} = 1$. The results of this procedure, included in Table I, show that the holographic case α = 4/3 and β = 1 has a very poor statistical adjustment, $\chi^2_{dof} = 22.23$, whereas, inversely, the models with α = 4/3 and β < 0.1, that is $0.25R < \rho_x < 0.27R$, behave reasonably well, leading to $\chi^2_{dof} = 0.761 < 1$. The two sets of expressions are equivalent, being written either in terms of $z_{acc}$ and $H_0$, as used in (14), or in terms of the current density parameters $\Omega_{c0}$ and $\Omega_{x0}$, as used in (15). These sets allow us to express the constant $\omega_0$ as

$\omega_0 = \alpha\,\Omega_{c0} + \beta\,\Omega_{x0} - 1,$   (16)

and to verify that the first line of Table I gives the correct values of α and β. In the next subsections we will use these values, α = 1.15, β = 0.01 and $\Omega_{c0} = 0.3$, in the figures and expressions for the partial densities and their ratio.

A. The crisis of the age

The age of the universe in units of $H_0^{-1}$ can be obtained as a function of the redshift z with the expression

$t(z) = \int_z^{\infty} \frac{dz'}{(1+z')\,H(z')}.$   (17)

We depict this age-redshift relation in the right panel of Fig. 1. The parametric curve of cosmological time t(z) is drawn from (17) in units of $H_0^{-1}$ for the best values $z_{acc} = 0.89$, $H_0 = 73.6$ km/s Mpc and $\gamma_s = 0.158$. Because the cosmological constraints with the Hubble data only cover redshifts over the range $0 \leq z < 2$, the comparison with cosmic milestones is trustworthy in this range only; for that reason we consider only two old stellar sources, the 4 Gyr old galaxy LBDS 53W069 at redshift z = 1.43 [15] and the 3.5 Gyr old galaxy LBDS 53W091 at redshift z = 1.55 [16]. We find that at low redshift z < 2 the Ricci-like holographic dark energy model seems to be free from the cosmic-age problem, namely, that the universe cannot be younger than its constituents.

B. The magnitude-redshift relation

It is well known that observations of type Ia supernovae (SNe Ia) have predicted and confirmed that our universe is passing through an accelerated phase of expansion. Since then, the observational data coming from these standard candles have been taken seriously. It is commonly believed that measuring both their redshifts and apparent peak fluxes gives a direct measurement of their luminosity distances, and thus SNe Ia data provide one of the strongest constraints on the cosmological parameters.
The theoretical distance modulus is defined as $\mu(z) = 5\log_{10} D_L + \mu_0$, where $\mu_0 = 43.028$ and $D_L$ is the Hubble-free luminosity distance, which for a spatially flat universe can be recast as

$D_L = (1+z)\int_0^z \frac{H_0\, dz'}{H(z')}.$   (19)

Using the best fit values of $\omega_s$ and $z_{acc}$ in Eqs. (14)-(19), we get the theoretical distance modulus μ(z), which we draw in the left panel of Fig. 2 together with the observational data $\mu_{obs}(z_i)$ [18] and their error bars. The theoretical distance modulus strongly depends on the model used, so, taking into account a particular cosmology and comparing its μ(z) with $\mu_{obs}(z_i)$, one can judge the plausibility of the cosmological model. As we see from the left panel of Fig. 2, the model is consistent with these observations. There are magnitudes that do not depend explicitly on the pair of constants (α, β), which selects one particular form for the energy density of the dark energy $\rho_x = (2\dot H + 3\alpha H^2)/(\alpha - \beta)$, but only on the linear combination $\omega_0$ defined in (16). These are the total energy density ρ, the deceleration parameter q and the global equation of state ω, whose explicit expressions can be written in terms of the transition redshift $z_{acc}$ and the asymptotic equation of state $\omega_s$ by means of (14) and the functions

$\omega(z) = \frac{\omega_s}{1 + (2-3\gamma_s)\left[\frac{1+z}{1+z_{acc}}\right]^{3(1-\gamma_s)}}, \qquad q(z) = \frac{1 + 3\,\omega(z)}{2},$

with $\omega_0 = \omega(0)$ given by (16). In the right panel of Fig. 2 we can see that the deceleration parameter of our models vanishes near $z_{acc} = 0.84$, so these universes enter the accelerated phase earlier than the ΛCDM model with present density parameters $\Omega_{c0} = 0.3$ and $\Omega_{x0} = 0.7$. The effective equation of state ω is plotted in the right panel of Fig. 3; looking there, we conclude that our models have $-1 < \omega(z) < 0$ in the interval $z \geq 0$. More precisely, ω(z) begins like non-relativistic matter, decreases rapidly around z = 2, and ends with the asymptotic value $\omega_s = -0.84$. The density parameters $\Omega_c = \rho_c/3H^2$ and $\Omega_x = \rho_x/3H^2$, their ratio $r = \Omega_c/\Omega_x$, the equation of state of the dark energy $\omega_x$ of Eq. (4), and also the interaction Q of Eq. (8) are, instead, described explicitly in terms of α and β by the expressions (22)-(26). The density parameters $\Omega_c$ and $\Omega_x$ and their ratio r(z) are plotted in the left panel of Fig. 3 for the best values α = 1.15, β = 0.01, $\omega_0 = -0.65$ and $\omega_s = -0.84$, where we can see that the general linear interaction Q helps to alleviate the coincidence problem. The latter is drawn in the right panel of Fig. 3 together with the dark energy equation of state $\omega_x$, which satisfies $-1 < \omega_x(z) < 0$ for $z \geq 0$. The linear interaction Q of Eq. (26) corresponds to the choice $u = \alpha\beta - 1 - \omega_s$ in Eq. (8), and its curve is always negative, satisfying the second law of thermodynamics, which requires that the energy flow goes from the dark energy to the dark matter [17].

IV. CONCLUSIONS

We have examined a modified holographic Ricci dark energy coupled with cold dark matter and found that this scenario describes satisfactorily the behavior of the energy densities of both dark components, alleviating the problem of the cosmic coincidence. We have shown that the compatibility between the modified and the global conservation equations restricts the equation of state of the dark energy component, relating it to the ratio of energy densities. This constraint makes the holographic density always interact with the non-holographic component, except in the unlikely event that α = 1, which is forbidden for positive energy densities. From the observational point of view, we have obtained the best fit values of the cosmological parameters $z_{acc} = 0.89$, $H_0 = 73.6$ km/s Mpc and $\gamma_s = 0.158$, with $\chi^2_{dof} = 0.761 < 1$ per degree of freedom.
The $H_0$ value is in agreement with that reported in the literature [18], and the critical redshift $z_{acc} = 0.89$ is consistent with BAO and CMB data [19]. We have found that, in the redshift interval where the comparison with old stellar sources is trustworthy, the model is free from the cosmic-age problem.
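The internal consistency of the quoted numbers can be checked in a few lines from the equation of state reconstructed in Sec. III.B; the following Python sketch assumes that reconstructed form and simply verifies that it reproduces $\omega_0 \approx -0.65$ and a vanishing deceleration parameter at $z_{acc}$.

# Consistency check for the best-fit values quoted in the text.
gs, z_acc = 0.158, 0.89
ws = gs - 1.0                      # asymptotic equation of state, -0.842

def omega(z):
    # omega(z) as reconstructed in Sec. III.B
    return ws / (1.0 + (2.0 - 3.0 * gs)
                 * ((1.0 + z) / (1.0 + z_acc)) ** (3.0 * (1.0 - gs)))

def q(z):
    return 0.5 * (1.0 + 3.0 * omega(z))

print(omega(0.0))   # ~ -0.65, the omega_0 of Eq. (16)
print(q(z_acc))     # 0: the expansion starts accelerating exactly at z_acc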
Plant asymmetric cell division regulators: pinch-hitting for PARs?

Like animals, plants use asymmetric cell divisions to create pattern and diversity. Due to a rigid cell wall and lack of cell migrations, these asymmetric divisions incur the additional constraint of being locked into their initial orientations. How do plants specify and carry out asymmetric divisions? Intercellular communication has been suspected for some time, and recent developments identify these signals as well as point to segregated determinants and proteins with PAR-like functions as parts of the answer.

Introduction and context

Dividing asymmetrically is a complicated task for eukaryotic cells. The cell must generally define an axis, localize fate determinants, and coordinately orient the mitotic spindle. Both plant and animal cells must accomplish these tasks, but unlike animal cells, plant cells are surrounded by a cell wall that impedes cell movement. In most plant cell types, the future division plane is set not by the spindle midzone position but by the position of another cytoskeletal array. This array, the pre-prophase band (PPB), acts prior to mitosis and spindle assembly [1]. In the plant male germline, asymmetric divisions involve microtubule-dependent nuclear migration and a unique microtubule array at the future germ pole [2]. These mechanical features make asymmetric divisions, and the orientation of those divisions, particularly important for generating the overall cell pattern during plant development, and indicate that plants may have needed to develop distinct methods for generating cell asymmetry. Asymmetric divisions are universally used to create cellular diversity and allow stem cell self-renewal. Four well-studied plant developmental contexts (the embryo, the root meristem, the epidermal stomatal lineage, and the male germline) exemplify key functions of asymmetric divisions (Figure 1). The creation of cellular diversity by asymmetric division begins during Arabidopsis embryogenesis, when the first zygotic division generates a small apical cell (the progenitor of most of the embryo) and a large basal cell (the future extra-embryonic suspensor) [3]. Asymmetric or oriented divisions are associated with the generation of the major tissue layers; they also create what will later become the stem cell niche of the root, when an asymmetric division of the hypophyseal cell yields a large basal daughter cell that generates the columella stem cells and a small apical cell that will give rise to the quiescent center (QC). The QC consists of a group of mitotically inactive cells that maintain adjacent stem cells [4]. Plants continue producing organs post-embryonically; much of this growth initiates from the activities of the stem cell pools in the root and shoot. However, asymmetric divisions are also vital for the de novo generation of cell lineages and patterns, as seen during stomatal development. Here, precursor cells of the stomatal lineage are generated by asymmetric divisions of meristemoid mother cells (MMCs) that are chosen, seemingly at random, from a field of equivalent protodermal cells. The daughters of the division include a stem cell-like meristemoid and a larger stomatal lineage ground cell (SLGC). The meristemoid, after completing a limited number of self-renewing asymmetric divisions, will differentiate into a guard mother cell and then divide symmetrically to form a pair of stomatal guard cells.
Although the SLGCs often differentiate into 'default' epidermal cells, they can also re-enter an asymmetric division phase, become MMCs, and produce satellite meristemoids whose positions are coordinated with neighbor cells through the precise orientation of their asymmetric divisions [5]. Plants again employ intrinsically asymmetric divisions during male germline formation; here, an asymmetric division of the haploid microspore generates two unequally sized daughter cells: a non-germline vegetative cell that exits the cell cycle and a germ cell that divides again to form the twin sperm cells of each pollen grain [6]. A unifying tenet of asymmetric cell division is that it generates two daughter cells with distinct fates. There are a number of ways that this endpoint can be reached, with some mechanisms requiring the establishment of polarities before division and others acting to influence the behavior of the daughters afterward. Screens for polarity regulators in animals led to the identification of the partitioning defective (PAR) proteins, the most conserved of which act in an asymmetrically localized cortical complex that establishes cell polarity and segregates fate determinants, thereby influencing the physical and fate asymmetries of daughter cells [7]. Homologues of the PARs are not found in plant genomes. Do plants use different proteins for analogous PAR-like functions? Here, we take individual elements of the PAR paradigm (the ability to promote asymmetric fates, polarized localization within dividing cells, and roles in segregating fate determinants) and discuss recent results from studies of plant embryos, roots, stomata, and pollen in reference to these behaviors and functions (Figure 2).

Major recent advances

Mutations in several genes lead to defects in specifying or maintaining unequal daughter cell fates. The asymmetric division that creates the embryo and suspensor requires mitogen-activated protein kinase (MAPK) signals, including the core cascade elements YODA (YDA), a MAPK kinase kinase [8], MPK3 and MPK6 (MAPKs) [9], and a sperm-supplied upstream regulator, SHORT SUSPENSOR (SSP), which activates MAPK signaling in the zygote [10]. Mutations in these components equalize presumptive embryo and suspensor cell sizes and identities. In the later hypophyseal division, two type 2C protein phosphatases, POLTERGEIST (POL) and POLTERGEIST-LIKE 1 (PLL1), are redundantly required to generate asymmetry; in pol/pll1 mutants, the hypophyseal division occurs symmetrically and neither of the resulting daughter cells adopts the appropriate fate [11]. There is some evidence for re-use of these putative signaling elements; for example, POL and PLL1 are used for the establishment of other stem cell populations [11], and the MAPK cascades are used again for stomatal divisions [9,12]. Later stomatal decisions also require unique regulators such as BREAKING OF ASYMMETRY IN THE STOMATAL LINEAGE (BASL); when BASL is absent, only 12% of meristemoids or MMCs divide asymmetrically, with the daughter cells often adopting the same fate [13]. POL/PLL1, SSP, YDA, and MPK3/6 are likely to be elements of pathways that lead to the appropriate timing or placement of divisions (or both) in response to information originating from outside of the asymmetrically dividing cell. This is consistent with the long-held view that plant cells derive their identity primarily from position (reinforced by cell-cell communication, for example [14]).
However, it is now clear that intrinsic mechanisms like the unequal inheritance of proteins are also likely to contribute to generating daughter cells with different fates. Transcripts of WUSCHEL-related homeobox 2 (WOX2) and WOX8, members of the homeodomain transcription factor family, while initially expressed throughout the zygote, are apparent in only the apical and basal cell, respectively, after its asymmetric division [15,16]. Similarly, WOX5 is expressed in the hypophyseal cell but after division is found in only the apical daughter cell [16]. In the root meristem, the NAC (NAM, ATAF1/2, and CUC2) domain transcription factor FEZ promotes the asymmetric division of columella stem cells and displays a dynamic localization pattern in both the stem cells and their daughters. FEZ protein is present in pre-division stem cells, but immediately after division the 'stem cell' daughters lack FEZ while their terminally differentiating sisters express FEZ; FEZ expression is only later reestablished in the daughters with stem cell identity [17]. In male germline formation, transcripts of the F-box protein FBL17 are expressed in the microspore, but the FBL17 protein is expressed only in the germ cell following asymmetric division [18]. While FEZ, members of the WOX family, and FBL17 show differential expression following asymmetric division, their subcellular localization and dynamics during the entire division process have not yet been reported; thus, we do not know whether any of these proteins are truly differentially segregated. Their molecular identities are also not easily reconciled with a direct role in generating intrinsic cellular polarities. If we take the cue from the PARs that polarity-generating proteins will exhibit polarized expression at the cell cortex, then what are the plant candidates? The PIN-FORMED (PIN) family of auxin transporters is tied to cellular and organismal polarity generation, and several of these proteins are localized to a single face of a cell (reviewed in [19]); however, there is scant evidence that the PINs are segregated during asymmetric divisions. Maize PANGLOSS1 (PAN1), a receptor-like protein, is highly polarized in stomatal lineage cells that will undergo asymmetric division, pointing to a possible role in generating pre-divisional asymmetry [20]. It is not clear, however, whether PAN1 is asymmetrically inherited [20]. Another stomatal protein, BASL, is both polarized and asymmetrically inherited. Prior to an asymmetric cell division in meristemoid cells or MMCs, BASL is both nuclear and in a polarized peripheral crescent [13]. Immediately following asymmetric division, BASL can be found in the nucleus of the smaller cell, and at the periphery as well as the nucleus of the larger daughter cell, with the main activity of BASL being ascribed to the peripheral pool [13]. Ectopic expression of BASL produces a localized zone of cellular outgrowth but does not appear to alter cell fates; thus, it too fails to behave in a manner completely analogous to the PARs. It is not known how the domain of peripheral BASL is established or whether BASL generates or responds to an earlier cellular polarity; interestingly, this polarity is likely transient, because the BASL crescent disappears from one site and is reestablished in a new polar crescent in redividing SLGCs [13].
Future directions

Unlike in animal systems, where the PAR proteins coordinate multiple aspects of asymmetric division in many developmental contexts, in the plant examples from the germline, embryos, meristems, and stomata we see a great diversity of regulators. Are there specific aspects of these different plant cell divisions that necessitate different controls? Or will homologues of regulators from one context participate in others? In each of these cases, what is the connection between the developmental specification of asymmetry and the execution of an oriented division? Several proteins have recently been shown to translate the position of the PPB into the subsequent new wall position [1,21], but virtually nothing is known about how the positions of the PPB and division plane are specified. Can we identify the targets of the transcription factor families, or connect the highly polarized proteins (BASL and PAN1) in a mechanistic way to the process of PPB placement? After their discovery, the PARs became an intellectual scaffold for considering other asymmetric cell division proteins. Future plant studies should be guided by the constant consideration of both the logical insights from the PARs and the unique constraints of plant development.
Enantioselective Michael Addition of Aldehydes to Maleimides Organocatalyzed by a Chiral Primary Amine-Salicylamide

A primary amine-salicylamide derived from chiral trans-cyclohexane-1,2-diamine was used as an organocatalyst for the enantioselective conjugate addition of aldehydes, mainly α,α-disubstituted, to N-substituted maleimides. The reaction was performed in toluene as a solvent at room temperature. The corresponding enantioenriched adducts were obtained in high yields and with enantioselectivities up to 94%. Theoretical calculations were used to justify the stereoinduction.

Introduction

Maleimides have been successfully used as building blocks in many asymmetric organocatalytic transformations for the preparation of compounds of interest [1]. Among the compounds that can be prepared by the organocatalytic functionalization of maleimides, succinimides are some of the most important, since the succinimide moiety is present in natural products and some clinical drug candidates [2-6]. Moreover, succinimides can be transformed into other interesting compounds, such as γ-lactams [7,8], which are important in the treatment of HIV [9,10], epilepsy [11,12], and other neurological disorders [13]. The most direct way of preparing enantioenriched substituted succinimides is the organocatalytic enantioselective conjugate addition of carbon nucleophiles to maleimides [1]. These carbon nucleophiles can be generated by the α-deprotonation of pro-nucleophiles bearing acidic α-hydrogens, such as 1,3-dicarbonyl compounds, by means of chiral organocatalysts that contain both a tertiary amine suitable to deprotonate the pro-nucleophile and an acidic moiety [1]. The subsequent formation of a closed transition state, which involves the coordination of the maleimide to the catalyst by means of a hydrogen bond, together with the enolate generated after deprotonation by the tertiary amine, leads to an efficient enantioselective process. However, when aldehydes are used as pro-nucleophiles, the α-deprotonation process becomes much more difficult. The corresponding conjugate addition can be achieved using primary amine-bearing organocatalysts, which are able to form the required transition states through a transient enamine generated with the pro-nucleophile. Thus, the first organocatalytic Michael addition of aliphatic aldehydes to N-aryl-maleimides used the α,α-diphenylprolinol silyl ether 1 as an organocatalyst, affording much lower enantioselectivities when α,α-disubstituted aldehydes were employed [14] (Figure 1). This type of diarylated prolinol has also been employed as an organocatalyst working in ionic liquids, although only with linear aldehydes [15].

Results and Discussion

The primary amine-salicylamide 15 was prepared as reported, by the monoamidation of (1R,2R)-cyclohexane-1,2-diamine with phenyl salicylate in refluxing propan-2-ol [42]. The search for the most appropriate reaction conditions (Table 1) was carried out using the model Michael addition reaction of isobutyraldehyde (16a) to N-phenylmaleimide (17a). Thus, the reaction organocatalyzed by 15 (10 mol%) in toluene as a solvent at room temperature afforded the corresponding substituted succinimide (R)-18aa almost quantitatively and in an excellent 94% ee after 2 days of reaction time (Table 1, entry 1). The (R) absolute configuration of the final adduct was determined by comparing the elution order of the corresponding enantiomers in chiral HPLC with those in the literature [38].
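All the enantioselectivities that follow were obtained from the integrated areas of the two enantiomer peaks in chiral HPLC. Since the paper gives no code, the following generic Python helper (a hypothetical function, not taken from the Supporting Information) makes the arithmetic explicit.

def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """Return the ee (%) from the integrated HPLC areas of the two enantiomers."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

# Example: a 97:3 enantiomer ratio corresponds to the 94% ee reported
# for (R)-18aa in Table 1, entry 1.
print(enantiomeric_excess(97.0, 3.0))   # 94.0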
The use of chlorinated solvents, such as dichloromethane or chloroform, afforded good yields but lower enantioselectivities for (R)-18aa (Table 1, entries 2 and 3). When a 2/1 v/v mixture of DMF/H2O was used as a solvent, the enantiomeric (S)-18aa was obtained in 79% ee (Table 1, entry 4). This inversion in the enantioselectivity of the process was also observed when the related primary amine-monocarbamate 14 was employed as an organocatalyst, and attributed to a loss of the bifunctional character of the catalyst due to the competitive hydrogenbond formation with water [38]. We also assayed the influence of the addition of some acid or basic additives. Thus, when benzoic acid (10 mol%) was added to the reaction mixture in toluene, (R)-18aa was obtained in only 77% ee (Table 1, entry 5), an enantioselectivity that rose up to 85% when LiCl was employed as a non- Results and Discussion The primary amine-salicylamide 15 was prepared as reported by the monoamidation of (1R,2R)-cyclohexane-1,2-diamine with phenyl salicylate in refluxing propan-2-ol [42]. The search for the most appropriate reaction conditions (Table 1) was carried out using the model Michael addition reaction of isobutyraldehyde (16a) to N-phenylmaleimide (17a). Thus, the reaction organocatalyzed by 15 (10 mol%) in toluene as a solvent at room temperature afforded the corresponding substituted succinimide (R)-18aa almost quantitatively and in an excellent 94% ee after 2 days reaction time ( Table 1, entry 1). The (R) absolute configuration of the final adduct was determined by comparing the elution order of the corresponding enantiomers in chiral HPLC with those in the literature [38]. The use of chlorinated solvents, such as dichloromethane or chloroform, afforded good yields but lower enantioselectivities for (R)-18aa (Table 1, entries 2 and 3). Results and Discussion The primary amine-salicylamide 15 was prepared as reported by the monoamidation of (1R,2R)cyclohexane-1,2-diamine with phenyl salicylate in refluxing propan-2-ol [42]. The search for the most appropriate reaction conditions (Table 1) was carried out using the model Michael addition reaction of isobutyraldehyde (16a) to N-phenylmaleimide (17a). Thus, the reaction organocatalyzed by 15 (10 mol%) in toluene as a solvent at room temperature afforded the corresponding substituted succinimide (R)-18aa almost quantitatively and in an excellent 94% ee after 2 days reaction time ( Table 1, entry 1). The (R) absolute configuration of the final adduct was determined by comparing the elution order of the corresponding enantiomers in chiral HPLC with those in the literature [38]. The use of chlorinated solvents, such as dichloromethane or chloroform, afforded good yields but lower enantioselectivities for (R)-18aa (Table 1, entries 2 and 3). When a 2/1 v/v mixture of DMF/H2O was used as a solvent, the enantiomeric (S)-18aa was obtained in 79% ee (Table 1, entry 4). This inversion in the enantioselectivity of the process was also observed when the related primary amine-monocarbamate 14 was employed as an organocatalyst, and attributed to a loss of the bifunctional character of the catalyst due to the competitive hydrogenbond formation with water [38]. We also assayed the influence of the addition of some acid or basic additives. 
Thus, when benzoic acid (10 mol%) was added to the reaction mixture in toluene, (R)-18aa was obtained in only 77% ee (Table 1, entry 5), an enantioselectivity that rose to 85% when LiCl was employed as a non-protic acid (Table 1, entry 6). The addition of an organic base, such as 4-N,N-dimethylaminopyridine (DMAP), gave a similar ee to that obtained before, but a much lower chemical yield (Table 1, entry 7). We were curious to determine whether the presence of the phenolic OH on the organocatalyst was determinant for achieving a high enantioselectivity. Thus, as the organocatalyst we employed the primary amine-containing benzamide 19, obtained by the reaction of (1S,2S)-cyclohexane-1,2-diamine with phenyl benzoate under conditions similar to those used for 15 (Figure 3) [38]. However, under the above optimal reaction conditions, organocatalyst 19 gave rise to adduct (S)-18aa in a lower 87% ee (Table 1, entry 8). Therefore, the presence of the phenolic OH in organocatalyst 15 had an influence on the enantioselectivity of the reaction. It is interesting to note that the related monocarbamate organocatalyst 14 gave only a 67% ee for (S)-18aa when toluene was used as a solvent [38].

The scope of the reaction with respect to the maleimide was then examined (Table 2). Thus, when 16a reacted with the N-arylmaleimides 17b and 17c, bearing electron-donating groups on the phenyl ring such as 4-methyl and 4-methoxy, the corresponding Michael adducts (R)-18ab and (R)-18ac were obtained with similar enantioselectivities (88% and 89%, respectively) (Table 2, entries 2 and 3). In addition, when the N-arylmaleimide 17d, bearing a chloro group at the para-position, was used, adduct (R)-18ad was obtained in an 88% ee (Table 2, entry 4). The presence of electron-withdrawing groups on the phenyl ring gave rise to a lower enantioselection. Thus, when the N-substituted maleimide 17e, bearing a 4-acetyl group, was employed with isobutyraldehyde, succinimide (R)-18ae was obtained in only a 13% ee (Table 2, entry 5). This value increased to 70% when maleimide 17f, bearing a 4-nitro group, was used as an electrophile (Table 2, entry 6). In addition, an N-alkylated maleimide, such as N-methylmaleimide (17g), was also used as an electrophile, affording succinimide (R)-18ag with an 82% ee (Table 2, entry 7). However, when the simple maleimide (17h) was employed, the final adduct (R)-18ah was isolated with a lower enantioselectivity (56%) (Table 2, entry 8).
We also explored the conjugate addition reaction of other α,α-disubstituted aldehydes with maleimide 17a. Thus, when cyclopentanecarbaldehyde (16b) was used, the corresponding Michael adduct (R)-18ba was isolated in an excellent yield and with an enantioselectivity of 82% (Table 2, entry 9). The use of cyclohexanecarbaldehyde (16c) as a pro-nucleophile gave only a 36% ee (Table 2, entry 10). In addition, when propionaldehyde (16d) was employed as a pro-nucleophile, a 1/1.2 mixture of diastereomers was isolated, and (2S,3R)-18da and (2R,3R)-18da were obtained in 79% and 89% ee, respectively (Table 2, entry 11).

Table 2 footnote: The absolute configuration of the known succinimides 18 was assigned in accordance with the elution order of the enantiomers in chiral HPLC when compared to the literature (see Experimental Section).
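As a side note for readers less familiar with the chiral HPLC readout: the ee values quoted throughout Tables 1 and 2 follow directly from the integrated areas of the two enantiomer peaks. The short sketch below (illustrative Python, not part of the original work; the function name is ours) shows the arithmetic.

```python
# Minimal sketch: enantiomeric excess (ee) from the integrated peak areas
# of the two enantiomers in a chiral HPLC trace.

def enantiomeric_excess(area_major: float, area_minor: float) -> float:
    """Return the ee (%) from the two integrated enantiomer peak areas."""
    return 100.0 * (area_major - area_minor) / (area_major + area_minor)

# Example: a 97:3 ratio of enantiomer peak areas corresponds to 94% ee,
# the value reported for (R)-18aa in Table 1, entry 1.
print(enantiomeric_excess(97.0, 3.0))  # -> 94.0
```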
To acquire further insight into the origin of the enantioselectivity, we carried out theoretical calculations on the reaction between isobutyraldehyde 16a and maleimide 17a in the presence of organocatalyst 15. According to our previous computational results on a related process [38], the reaction proceeds by the formation of an enamine, followed by an attack on the electrophilic maleimide substrate through an endo transition state (Figure 4). We also considered the possible exo approach, but its higher activation energies were sufficient to discard its participation in the process. In this situation, the two faces of the enamine were clearly differentiated. If the approach of the maleimide occurred through the upper face (from our view), as in TS-1R, an efficient H-bond between the C=O group of the maleimide and the NH group of the catalyst was formed. This activated the electrophile and induced a low activation barrier (15.2 kcal/mol) for the formation of the R product. If the approach of the two reactants occurred from the other face, as in TS-1S, the carbonyl and NH moieties were too far apart to form any effective H-bond, the activation energy could not be lowered, and the barrier was 20.3 kcal/mol. Thus, the large energy difference between the two transition states nicely explained the formation of (R)-18aa as the experimentally major isomer.

Meanwhile, the diastereomeric transition states for the reaction organocatalyzed by 19 were also computed. As mentioned before, catalyst 19 lacked the phenolic OH group and showed similar reactivity and a moderately lower enantioselectivity under the same conditions as catalyst 15 (Table 1, entries 1 and 8). Interestingly, the optimal transition states located for 19 (TS-2S and TS-2R, Figure 5) showed quite similar activation parameters to those of catalyst 15, although with differences sufficient to explain a moderate decrease in the enantioselectivity. For example, the Gibbs free activation barriers leading to the two enantiomers were almost equivalent (15.2 vs. 15.5 kcal/mol and 20.3 vs. 20.7 kcal/mol), but the activation enthalpy difference between TS-1R and TS-1S was 6.3 kcal/mol, while the same value for TS-2S vs. TS-2R was 5.2 kcal/mol. Thus, the presence of the OH induced a slight increase in the enthalpy gap between the two faces of the maleimide. Also, the critical H-bonding distances differed slightly, being shorter for TS-1R than for TS-2S (compare the O-H distances in Figures 4 and 5). These data indicated that the H-bonding activation of the maleimide was optimal when the phenolic OH was present: it lowered the enthalpy barrier in TS-1R and rigidified the NH-CO-Ar benzamide system, reducing the conformational variability that is present in catalyst 19; together, these effects helped to increase the enantioselectivity observed when organocatalyst 15 was employed.

Figure 5. Transition states for the formation of (S)-18aa (a) and (R)-18aa (b) catalyzed by 19.
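To connect the computed barriers to the observed selectivity, the standard transition-state-theory (Curtin-Hammett) relation can be applied: the enantiomer rate ratio scales as exp(ΔΔG‡/RT). A minimal sketch follows (ours, not the authors' code). Note that the 5.1 kcal/mol gap between TS-1S and TS-1R over-predicts the measured 94% ee, as continuum-solvent DFT barriers often do, but it correctly identifies the R product as strongly dominant.

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

def predicted_ee(ddG: float) -> float:
    """ee (%) predicted from the Gibbs activation-energy gap ddG (kcal/mol)
    between the two competing diastereomeric transition states."""
    ratio = math.exp(ddG / (R * T))        # major/minor rate ratio
    return 100.0 * (ratio - 1) / (ratio + 1)

# Catalyst 15: TS-1S - TS-1R = 20.3 - 15.2 = 5.1 kcal/mol
print(predicted_ee(5.1))   # ~99.96% ee -> essentially complete R selectivity
```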
General Information

All of the reagents and solvents employed were of the best grade available and were used without further purification. The 1H NMR spectra were recorded at room temperature on a Bruker AV300 at 300 MHz, using TMS as the internal standard. The absolute configuration of adducts 18 was determined according to the order of elution of their enantiomers in chiral HPLC. Reference racemic samples of adducts 18 were obtained by performing the conjugate addition reaction using 4-methylbenzylamine (20 mol%) as an organocatalyst in toluene as a solvent at room temperature.

General Procedure for the Asymmetric Conjugate Addition Reaction

A solution of 15 (0.02 mmol, 4.7 mg) and the maleimide 17 (0.2 mmol) in toluene (0.5 mL) was added to the aldehyde 16 (0.4 mmol), and the mixture was stirred at rt for 48 h (TLC). The reaction was quenched with 2 N HCl (10 mL), and the mixture was extracted with AcOEt (3 × 10 mL). The organic phase was washed with saturated NaHCO3 (10 mL) and brine (10 mL), dried over MgSO4, and filtered, and the solvent was then evaporated (15 Torr) to give the crude product, which was purified by silica gel chromatography (n-hexane/AcOEt gradients). Adducts 18 were identified by comparison of their 1H-NMR data with those in the literature. Their enantiomeric excesses were determined by chiral HPLC using the conditions described in each case.

Computational Methods

All reported structures were optimized at the Density Functional Theory level using the B3LYP functional [43-45] as implemented in Gaussian 09 [46]. Optimizations were carried out with the 6-31G(d,p) basis set. The stationary points were characterized by frequency calculations to verify that they had the right number of imaginary frequencies. The reported energy values correspond to Gibbs free energies, including single-point refinements at the M06-2X/6-311+G(d,p) [47] level of theory with a solvent model (IEFPCM, toluene) [48-50] on the previously optimized structures (computed structures in the Supplementary Materials).
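For orientation, the two-step protocol described above can be written out as a pair of Gaussian 09 input decks. The sketch below is a hypothetical illustration with placeholder coordinates and file names, not the authors' actual inputs: it generates an opt+freq job at B3LYP/6-31G(d,p) and the subsequent M06-2X/6-311+G(d,p) single point with IEFPCM(toluene).

```python
# Illustrative only: Gaussian 09 inputs mirroring the stated protocol.
# The geometry below is a placeholder, not a transition-state structure.

OPT_ROUTE = "#p opt freq b3lyp/6-31g(d,p)"
SP_ROUTE = "#p m062x/6-311+g(d,p) scrf=(iefpcm,solvent=toluene)"

def write_gjf(path, route, title, charge, mult, xyz_block):
    """Write a minimal Gaussian input file (link0, route, title, geometry)."""
    with open(path, "w") as f:
        f.write(f"%nprocshared=8\n%mem=8GB\n{route}\n\n{title}\n\n"
                f"{charge} {mult}\n{xyz_block}\n\n")

xyz = ("O  0.000  0.000  0.000\n"
       "H  0.000  0.000  0.970\n"
       "H  0.940  0.000 -0.240")  # placeholder coordinates
write_gjf("ts_opt.gjf", OPT_ROUTE, "optimization (placeholder coords)", 0, 1, xyz)
write_gjf("ts_sp.gjf", SP_ROUTE, "single point in toluene", 0, 1, xyz)
```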
Conclusions

We conclude that a primary amine-salicylamide, prepared by the simple monoamidation of an enantiomerically pure trans-cyclohexane-1,2-diamine, acts as an efficient organocatalyst in the enantioselective conjugate addition of aldehydes to maleimides, leading to enantiomerically enriched succinimides. Good yields and enantioselectivities can be achieved working in toluene as a solvent at room temperature. Theoretical calculations suggest that the phenolic OH present in catalyst 15 helps to preorganize the system, inducing more effective H-bonding of the benzamide NH towards the activation of the maleimide. This activation is only effective on one of the faces of the maleimide (TS-1R), leading to a high degree of enantioselectivity with 15.
2018-12-15T14:02:35.408Z
2018-12-01T00:00:00.000
{ "year": 2018, "sha1": "36208a897c2f54e24b30c06ee9169e5d8b23072f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1420-3049/23/12/3299/pdf", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "834d2d010f36fb00e615daac613e68c0c2a6ee21", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
254240720
pes2o/s2orc
v3-fos-license
Tanwir Arabiyyah: Arabic as Foreign Language Journal

ABSTRACT

INTRODUCTION

The process and way of thinking through the various steps of a project in order to achieve a goal can be called "planning" (Sanwil et al., 2021:135). In KBBI, the term "planning" comes from the word "plan", meaning the design or framework of something to be carried out (Hasil Pencarian - KBBI Daring, n.d.). Planning starts from a needs analysis that determines the goals to be achieved, with complete documentation, after which the subsequent steps can be carried out to reach those goals effectively. Planning is helpful for both long-term and short-term implementation (Switri, 2022:54-55). Planning is inseparable from process, in this case the learning process: how teachers direct learning and how they help and guide students in a learning process that later becomes a learning experience for the students. It also concerns the learning environment, that is, the process of interaction between students and teachers in a learning setting. Therefore, in order for learning to proceed and achieve its goals, it is necessary to make a plan, implement it, and then evaluate whether the plan has been carried out effectively and efficiently. A good, relevant, and flexible learning process cannot be separated from a learning plan. The primary and most important element for teachers is a systematic syllabus with continuity between the material and the competencies to be acquired by students, so that the results obtained will be more satisfying. Based on the explanation above, the author discusses Arabic language learning planning, covering its definition, sources, goals, and content.

METHOD

Through a qualitative approach, this study used the literature review method to analyze the data. The data were sourced from various literature: books, notes, magazines, relevant studies, and related articles (Azlia Cahyani Ngalawi & Hakim Zainal, 2020). The steps of data collection were to (1) collect data in accordance with the sub-topics discussed, (2) collect and classify data from the sources for analysis, (3) analyze the advantages and disadvantages related to the data, and (4) draw conclusions from the results of the analysis together with the author's ideas (Nazhyfa et al., n.d.).

Definition of Arabic Learning Planning

Roger A. Kaufman (Perspektif Manajemen Pembelajaran Program Keterampilan, 2016:12), an educational figure (United States International University), stated that learning is a procedure carried out by teachers to achieve the desired goals. Nana and Sukirman, in Mushlih (2019), stated that learning planning is an elaboration, enrichment, and development of the curriculum. Planning in Arabic is called takhtit (تخطيط), as in takhtit al-manahij (تخطيط المناهج, curriculum planning), al-takhtit al-tarbawi (التخطيط التربوي, learning planning), and 'amaliyyat al-takhtit (عملية التخطيط, the planning process). The term takhtit indicates an understanding of the conceptual nature of the various activities carried out (Switri, 2022). Majid, in Manajemen Mutu Pendidikan (2016), describes planning as the determination of the work that must be carried out by a community or group to achieve its goals. A plan must be formulated before the activity is carried out. Teachers must ensure that they are competent in what is taught, know what they can do, and know what is expected from the learning (its goals). Based on the explanation above, it can be concluded that planning is a program designed systematically, logically, and in a well-organized manner in accordance with clear intentions and goals.
In essence, an instructional plan is part of education as well as learning, and it is intended to be distinguished from knowledge, education, and other forms of learning. Learning planning is the process of making decisions, based on rational thinking, about particular learning goals, as well as about changing attitudes and behavior by utilizing all available capabilities and learning resources (Widyastuti et al., 2021). It is necessary to have a plan before teaching in class; this aims to achieve the learning goals. As for the learning goals, specifically in this case the goals of Arabic learning, they are to improve language skills, starting from good and correct nahwu and sharaf, and to increase vocabulary. Language skills are divided into four parts, namely maharah qira'ah, kalam, istima', and kitabah (reading, speaking, listening, and writing) (Farhad & Sa'diyah, 2021). Planning before teaching is the most important factor in the success of teachers, and the plan must be set down in a worksheet. The importance of learning planning is as follows: (1) it helps teachers during teaching, (2) it provides comfort to students, and (3) it serves as a control tool for an institution and a means to achieve the curriculum goals (Putrianingsih et al., n.d.). In another view, learning planning aims to make the process run well by providing an understanding of the learning goals, learning strategies, learning techniques, and the media used to achieve a goal. Such a learning plan can later change the behavior of students, and the whole series of activities carried out in the learning process can achieve the goals as designed. Therefore, a learning plan can become a guideline for planning a lesson according to the desired needs (Gentry, in Ghozali et al., n.d.). From the explanation above, it can be concluded that learning planning is very important in the learning process: to achieve the desired goals, attention must be paid to how the learning process is carried out in accordance with those goals, and a learning plan must consider the methods, strategies, techniques, and media that will be used in carrying out the learning process in the classroom.

Sources of Arabic Language Learning Planning

According to the Law on the National Education System Number 20 of 2003, learning is the interaction of students, teachers, and learning sources in a learning environment. Learning is designed as an interaction between a teacher and one or more people to develop the knowledge, skills, and learning experiences of students. Based on the Law on the National Education System, learning planning is an important step that must be carried out by teachers before learning and educational activities in order to achieve the final goals of learning. Learning is not just a routine activity; it is didactic communication with messages, systems, procedures, and goals. Therefore, teachers must prepare themselves carefully to carry out the learning process.

Goals of Arabic Language Learning Planning

The main goal of preparing learning plans is to facilitate the implementation of learning. The learning plan is a reminder for teachers to prepare and use media, choose learning strategies, manage the class, and handle other technical issues. As happens in the field, there are always different ways to achieve optimal results; when making plans, decisions are made about which option is best, so that the process of achieving the goals runs efficiently.
Therefore, planning makes several things possible: (1) the process is planned as well as possible so that success is not accidental, meaning that it can be seen to what extent success has been achieved; (2) it can be used in solving problems, or in other words it serves as a tool for problem solving; (3) learning sources can be used properly; and (4) with planning, learning is not carried out in a hurry or suddenly, but in a well-directed way (Cahya Edi Setyawan, 2020). In Arabic language learning, in order to make students proficient, the goals are to encourage, guide, and develop Arabic language skills, both productively and receptively. The ability to understand other people or to understand a text is called comprehension. In addition to comprehension, improving language skills as a means of spoken and written communication is called productivity. Providing knowledge of Arabic also relates to the sources of Islamic teachings, such as the Qur'an, the hadith, and Islamic books (Keagamaan, 2018).

Content of Arabic Language Learning

Learning planning is related to curriculum planning, enrichment, and development. Beyond the curriculum, teachers must also consider the conditions, situation, and potential of the school in developing the curriculum. Thus, content or learning design models are developed differently in different educational settings. In planning, it is necessary to have a syllabus together with a learning plan; these two documents contain the related subjects, SK (competence standards), KI (core competencies), KD (basic competencies), learning indicators, achievements, learning goals, the material taught, time allocation, methods, strategies, evaluation, and learning sources. Planning aims to achieve the goals as efficiently and effectively as possible. Likewise, before delivering learning materials, teachers must prepare a plan by determining the goals to be achieved and the methods and means needed to achieve those goals as efficiently and effectively as possible. In designing a learning plan, teachers must first look at the components needed to achieve the desired goals. The steps in preparing the components of a learning plan are as follows.

Formulating Goals

The formulation of learning goals can refer to Bloom's Taxonomy (1956), in which learning goals are divided into three domains: (1) the cognitive domain, concerning activities carried out by thinking; (2) the affective domain, concerning attitudes and values; and (3) the psychomotor domain, concerning the skills that must be possessed. This can be the main reference in formulating learning goals; it helps teachers prepare the learning process properly, design the strategies and methods to be used, determine the tools, media, and learning sources, and choose the appropriate type of evaluation, so that students are assisted in acquiring language skills and the relation between the goals and the curriculum used becomes clear.

Learning Materials

Subject matter is the learning material that expresses the content of the curriculum and the basic competencies that must be mastered by students; the goal of these basic competencies is to meet a competency standard for each subject in an educational unit. Materials can be concepts, facts, or the basic ideas of a body of knowledge (Aflisia, 2016).
For this reason, material is the most important part of a learning process, because the learning process conveys a body of information that must be mastered by students, as stated in the current curriculum. There are two models: a separate system (nadzariyah al-furu'), in which the Arabic language learning program divides the language into several branches of study, and an integrated system (nadzariyat al-wahdah), which views language as a unified whole whose parts are interrelated (Saefuloh & Aflisia, 2022). Sudjana, in (DR. Tarpan Suparman, 2020), explained the points that need to be considered in determining learning material, namely:

1. The material is taught to achieve the learning goals.
2. The material is written in outline; there is no need to write it in detail.
3. The material taught must be in accordance with the teaching material, and this teaching material must in turn be in accordance with the learning goals. Material from teaching sources must be written clearly and in detail.
4. The order of the teaching material should pay attention to continuity.
5. The material is arranged from simple to complex, from easy to difficult, and from concrete to abstract, so that students can understand it easily.
6. The material must contain both factual and conceptual learning material. Factual material is easy, while conceptual material requires a deep understanding.

Learning Method

The learning method is the means of conveying learning materials that is useful in achieving the goals; it determines the success of the learning process and is an integral part of the teaching system. Therefore, it can be concluded that the learning method is the way, or the set of steps, used to convey material or teaching content from teachers to students in order to support the success of the designed learning goals (Dr. Rusydi Ananda et al.).

Learning Media

Learning media are useful for helping teachers convey the aims and goals of the material being taught to students. Media can make it easier for students to understand the messages conveyed by teachers, and media are also useful for motivating students to learn (Ritonga et al., 2016). Many media can be used as learning aids, including audio, audiovisual, and visual media.

Learning Sources

Learning sources in the learning environment are used functionally to optimize learning outcomes; they can motivate students to learn and so accelerate their mastery of the knowledge or material being studied.

Assessment of Learning Outcomes

Assessment of learning outcomes refers to methods or techniques for determining the results achieved by students. In the context of learning planning, pedagogy makes assessment an important part of the learning itself; that is, assessment is an integral part of planning and implementation. Assessment aims to determine the learning outcomes of students and to evaluate the effectiveness and efficiency of educational activities, as material for the development and improvement of educational programs. In general, the evaluation of learning outcomes aims to see how far the learning program has achieved the predetermined goals.
In particular, Reece and Walker, as cited by Aunurrahman (2011:209), explain that the evaluation of learning outcomes serves the following aims: (1) strengthening learning activities, (2) testing students' understanding and skills, (3) supporting the implementation of learning activities, (4) maintaining quality standards, (5) accelerating learning processes and outcomes, (6) predicting future learning outcomes, and (7) evaluating the quality of learning (Abdulrahaman et al., 2020; Ritonga, Wahyuni, et al., 2023). Thus, evaluation makes it possible to find out how well students have mastered and understood the material being taught; this applies not only to teachers but also to the students themselves. Students can find out which material needs clarification, and evaluation of course also informs the improvement of the next program.

CONCLUSIONS

Learning planning is a systematic process of learning development used to ensure the quality of learning on a theoretical basis. According to the Law on the National Education System Number 20 of 2003, learning is the interaction of students, teachers, and learning resources in a learning environment. Learning is designed as an interaction between a teacher and one or more people to develop the knowledge, skills, and learning experiences of students. By improving learning planning, learning designers are expected to improve the quality of the learning they carry out.
2023-06-22T15:02:31.156Z
2023-06-15T00:00:00.000
{ "year": 2023, "sha1": "b602df7240eeff982521768d1092f3dc58611639", "oa_license": "CCBYSA", "oa_url": "https://jurnal.umsb.ac.id/index.php/aflj/article/download/3957/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f984845403e5f0541f2122894bd9768ebcc80fb1", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
104664967
pes2o/s2orc
v3-fos-license
The Potential Application of Heavy Ion Beams in the Treatment of Arrhythmia: The Role of Radiation-Induced Modulation of Connexin43 and the Sympathetic Nervous System

It has been known that heart disease, such as myocardial infarction (MI), cardiac hypertrophy, or heart failure, alters the molecular structure and function of the gap junction, which can lead to an abnormal heart rhythm. Radiation has been shown to modulate intercellular communication in the skin and lungs by increasing connexin43 (Cx43) expression. Understanding how Cx43 upregulation is induced in a diseased heart can help provide a new perspective on radiation therapy for arrhythmias. In a recent study with rabbits after MI, carbon ions were accelerated to 290 MeV/u and extracted in air; a biologically (cell kill) uniform 6-cm spread-out Bragg peak beam was generated, and the beam depth in tissue was set to 30 mm with energy degraders. Targeted heavy ion irradiation (THIR) of the left ventricle with 15 Gy increased Cx43 expression, improved conductivity, decreased the spatial heterogeneity of repolarization, and reduced the vulnerability of rabbit hearts to ventricular arrhythmias after MI. In clinically normal rabbits, THIR ≥10 Gy caused a significant dose-dependent increase of Cx43 protein and messenger RNA 2 weeks after irradiation. The left (irradiated) and right (nonirradiated) ventricles exhibited circumferential upregulation of Cx43 lasting for at least 1 year. There were no significant changes in electrocardiograms and echocardiograms, indicating no apparent injury for 1 year. A single exposure of 135 MeV/u THIR with 15 Gy to a dog heart attenuated vulnerability to ventricular arrhythmia after the induction of MI for at least 1 year through the modulation of Cx43 expression. This long-lasting remodeling effect on gap junctions may lay the groundwork for novel therapies against life-threatening ventricular arrhythmias in structural heart disease. To date, there have been few investigations into the effects of carbon-ion irradiation on electrophysiological properties in the human heart. Patients with mediastinal cancer were followed for 5 years after treatment that included irradiation of the heart, and the investigators found that carbon-ion beam irradiation of the heart is not immediately cardiotoxic and demonstrates consistent signals of arrhythmia reduction. Its practical application in non-cancer treatment, such as arrhythmia treatment, is highly anticipated.

Introduction

There are 450,000 sudden cardiac deaths in the United States per year, and ventricular tachycardia/ventricular fibrillation (VT/VF) is reported as the cause in around 70% to 80% of the cases. Recently, it has been demonstrated that atrial fibrillation is associated with an increased risk of cardiovascular events and sudden cardiac death [1]. An analysis of 5200 adults aged 28 to 62 years participating in the Framingham Heart Study showed that the lifetime risk of early death due to sudden cardiac arrest is lower for women (1 in 30) than for men (9 in 30) [2]. An implantable cardioverter defibrillator (ICD) has been reported to be useful for preventing sudden death, but its frequent use lowers not only quality of life but also survival rates [3]. Myocardial damage from electric shocks is thought to deteriorate cardiac function, but the mechanism remains unknown. Combination therapy with antiarrhythmic agents and catheter ablation is effective in reducing the number of ICD implantations.
Amiodarone is effective in patients with organic heart disease, but continued use over time can reduce its effect or cause side effects. In contrast, catheter ablation may lead to a cure, but surgeons need advanced skills and knowledge to undertake this operation. Catheter ablation is often effective in patients with endocardial lesions, but it can be hard to apply in those with epicardial lesions. Therefore, developing a new antiarrhythmic therapy is an important endeavor that appeals to both scientific and societal interests.

Myocardial Gap Junction Protein and Arrhythmia Occurrence

Three microstructures interconnect cardiomyocytes: the gap junction, the desmosome, and the fascia adherens. The latter two, the desmosomes and fascia adherens, are the anchoring sites of the cytoskeleton and the contractile proteins, respectively. As mechanically linked sites between cells, they do not contribute to excitation and conduction. In contrast, a gap junction comprises aggregates of multiple connexons interwoven with the lipid bilayers of the cell membranes; connexons from 2 adjacent cells, separated by a narrow gap (of about 20 Å), bind to each other and form an intercellular channel with a central diameter of 1 to 2 nm. Small molecules and ions (<1 kDa) passing through them establish the electrical and metabolic connections that constitute gap junction intercellular communication. Each connexon is composed of 6 subunits (known as connexins, Cx) (Figure 1A) [4]. A Cx is a protein with 4 transmembrane domains and a long intracellular C-terminus [5]. The Cx gene family comprises multiple genes, and, in humans, more than 20 isoforms have been identified [6]. Mammalian cardiomyocytes express 3 types of Cxs (Cx40, Cx43, and Cx45), with molecular weights of 40 kDa, 43 kDa, and 45 kDa, respectively. Of these, Cx40 is found primarily in the atrial muscle and is abundantly expressed in the intraventricular stimulation conduction system (His bundle, bundle branches, and Purkinje fibers); Cx43 is most commonly expressed in the atrial and ventricular muscles; and Cx45 is mainly expressed in the sinoatrial/atrioventricular nodes (Figure 1B) [7]. For normal heart function, coordination of excitation and contraction via the gap junction is indispensable [8]. In the diseased heart, owing to myocardial infarction (MI), cardiac hypertrophy, heart failure, and so on, the gap junction plaque undergoes remodeling of its molecular structure and function [9] and forms a substrate for VT/VF and atrial fibrillation. "Substrate" is defined here as a physiological and/or structural change that is identified as a low-potential region by electrophysiological study. In the past, several studies have attempted to restore the electrical connections of ventricular myocytes by increasing the amount of Cx43 protein, for example by activation of Cx43 using endothelin-1 and angiotensin II [10], thyroid hormone analogs [11], rotigaptide (ZP123) [12], or nitrofen and vitamin A [13]. In mouse experiments, transplanting Cx43-expressing myocytes has been shown to significantly decrease post-MI VT inducibility [14].

Antiarrhythmic Action Due to Heavy Ion Irradiation in Animal Experiments

Gap junctions are distributed not only in the heart but also in other organs, and they are responsible for various functions, such as cell adhesion, cytoskeleton formation, homeostasis of adjacent cells, and cancer cell suppression [15,16].
In the presence of highly malignant cancer cells, gap junctions control signal transduction, contributing to the suppression of tissue invasion and metastasis [17]. In the field of oncology, it has been known since the 1990s that x-ray irradiation of the rat alveolar epithelium [18] and of mouse skin cells [19] elevates the Cx43 protein level. A study using human diploid fibroblast cultures showed that alpha-ray irradiation improved intercellular communication by increasing the Cx43 level [20]. Activation of the Cx43 promoter is reported to be induced by the nuclear factor of activated T cells and activator protein-1 [21,22], and it has been demonstrated that Cx43 expression depends on the radiation dose [23]. Focusing on the phenomenon that radiation promotes the induction of the Cx43 protein, we hypothesized that targeted heavy ion irradiation (THIR) of the diseased heart could lead to the recovery of reduced Cx43 expression. In cancer treatment with particle beam therapy, the irradiated area is determined by manipulating the spread-out Bragg peak. Taking advantage of this property, in 1997 we launched a noninvasive arrhythmia-treatment study using heavy ion beams at the Heavy Ion Medical Accelerator in Chiba as a joint study with the National Institute for Radiological Science [24]. THIR was performed using 290 MeV/u carbon beams and a 6-cm spread-out Bragg peak, as used in cancer treatments. An irradiation field of 2 × 2 cm² was set up with the rabbit's left ventricular free wall as the target. The depth from the precordial skin surface to the anterior surface of the left ventricle was estimated to be 2 to 3 cm, considering variations in cardiac motion and respiration. Rabbits with nontransmural MI induced by microsphere injections received a single dose of 290 MeV/u carbon beams at 15 Gy to the heart. After 2 weeks, a considerable increase in the expression of Cx43 was detected in the infarcted area by immunostaining, reverse transcription polymerase chain reaction, and western blot (Figure 2). In vivo electrophysiological experiments showed improved spatial heterogeneity of the action potential duration, improved conduction velocity, and reduced VT/VF inducibility (Figure 3). These results suggest that heavy ion beams improve the electrical coupling of ventricular myocytes via Cx43 upregulation, resulting in an antiarrhythmic action [25]. As a subsequent study, we examined the duration of the time- and dose-dependent effects on Cx43 using normal rabbits without MI. Comparing doses of 5 Gy, 10 Gy, and 15 Gy, we found that Cx43 expression was significantly elevated at doses ≥10 Gy, and the effect persisted for at least 1 year at 15 Gy (Figure 4). No deterioration of cardiac contractility or chamber dilation was detected by cardiac ultrasound examination for a year, nor was pathological degeneration of the myocardium observed [26]. Next, we performed experiments at RIKEN using beagle dogs with non-transmural MI that received 135 MeV/u carbon beams at 15 Gy. After 1 year, a signal-averaged electrocardiogram examination showed improvement of the late ventricular potential deterioration after infarction (Figure 5), and the induction rate of VT/VF was lowered [27].

Cardiac Sympathetic Denervation After Heavy Ion Irradiation

There is a strong relationship between arrhythmia and cardiac sympathetic nerve remodeling [28], in addition to gap junction remodeling.
In the human heart, extrinsic sympathetic innervation is mediated via the cervical, stellate, and thoracic ganglia [29]. Anesthetic blockade or surgical resection of the stellate ganglion reduces ventricular arrhythmias [30,31]. Iodine-123 metaiodobenzylguanidine (123I-MIBG) imaging has been a useful tool for diagnosing sympathetic function and distribution, and it has thus been used clinically in patients with organic heart disease to evaluate the risk of ventricular arrhythmias [32]. In patients with atrial fibrillation, sympathetic and parasympathetic fibers are densely distributed over the roof of the left atrium and the pulmonary veins [33]. Animal studies of dogs with pacing-induced heart failure have demonstrated that left and right thoracic sympathetic ganglion ablation reduces the number of atrial tachycardia episodes compared with a control group without stellate ganglion ablation [34]. Modifying the autonomic nerve response is key to treating arrhythmia [35]. In our first experiments in 1997, using a 290 MeV/u carbon beam with a high dose of 90 Gy in rabbits with non-transmural MI [36], we aimed to examine the influence of heavy ions on the cardiac sympathetic nerve. Examination with 123I-MIBG autoradiography 1 month after exposure confirmed uniform sympathetic denervation corresponding to the irradiated area of the left ventricle. In contrast, myocardial necrosis was not widespread across the irradiated region, and contractility was not affected. These results were presented at the American Heart Association conference in 2000 as the world's first preliminary study of heavy ion beam application for antiarrhythmic therapy. An electrophysiological test following irradiation with heavy ions showed distinct suppression of VT/VF inducibility compared with the control [37]. Prior to the presentation, a 3-dimensional model of the varying severity of the denervated region was successfully constructed using 125I-MIBG autoradiography, and it was demonstrated that the instability of the myocardial action potential increases in an inhomogeneously denervated region [38]. Since the dose used in that study, a single bolus of 90 Gy, was high, it is unclear whether the same effect would occur at lower radiation doses. The mechanism of cardiac sympathetic denervation due to radiation is also unclear. The myocardium is radioresistant; therefore, the onset of new malignant tumors accompanying radiation-induced myocardial damage is rare. In contrast, the late occurrence of MI due to coronary artery occlusion is known. Such side effects are primarily attributable to x-rays, and long-term observation data for heavy ion beams are still insufficient.

Figure 4 (legend). (A) In the control, Cx43 formed clusters of punctate immunofluorescence domains confined to well-organized intercalated disks running perpendicular to the longitudinal axis. Targeted heavy ion irradiation (THIR) resulted in a characteristic increase in immunopositive Cx43, not only at the intercalated disk regions but also at the lateral cell borders, during the entire 1-year follow-up period. (B) The proportion of the total cell area occupied by Cx43 immunoreactive signals in the combined analysis of 25 rabbits (5 in each group). The radiation resulted in a significant increase in the immunopositive signals by 71% to 116% compared with the control group (P < .05). We estimated the proportion of the Cx43 label at the lateral cell surface (LS) over the total label, including both the LS and the intercalated disk region (ID). The values (LS/(ID + LS)) after THIR (28.2% ± 10.9% at 2 weeks, 30.2% ± 15.7% at 3 months, 28.1% ± 11.6% at 6 months, and 18.9% ± 14.2% at 12 months) were all significantly larger than in the controls (9.3% ± 6.3%; P < .05). (C and D) The amount of Cx43 protein estimated by western blotting and the level of Cx43 messenger RNA estimated by reverse transcription polymerase chain reaction were also increased after THIR throughout the entire follow-up period from 2 weeks to 12 months, by 37% to 55% and by 24% to 59%, respectively, compared with the controls (P < .05).
If a risk of complications increases alongside the antiarrhythmic effect, the benefits of the particle beam may be canceled out. There is a need to evaluate the safety of heavy ion beams through long-term follow-up of patients.

Future Directions With Clinical Application

As a preliminary study of arrhythmia treatment with heavy ion beams, we examined the influence of heavy ions on impulse conduction in the myocardium, focusing on patients who underwent thoracic radiation therapy for mediastinal tumors. Eight patients were enrolled in a prospective study between April and December 2009 (2 men and 6 women; average age, 72.5 years). The total irradiation dose was 44 to 72 Gy (RBE), and the heart irradiation dose was 1.3 to 19.1 Gy (RBE). A high-resolution ambulatory electrocardiogram was recorded before and after carbon-ion radiotherapy to evaluate arrhythmic events, depolarization abnormality by late potentials, and autonomic nerve function by heart rate variability. The examinations were performed within a week before radiotherapy and 1 month after the completion of radiotherapy. The results revealed that, before irradiation, supraventricular and ventricular arrhythmias (including premature atrial contraction, paroxysmal atrial fibrillation, atrial fibrillation, and premature ventricular contraction) were observed in 5 patients; 4 patients improved after irradiation, whereas 1 remained unchanged (Figure 6). Depolarization abnormalities improved in 2 patients with respect to both the atrial and the ventricular late potential, and there were no cases of deterioration. Six patients who were irradiated on both sides of the stellate ganglion showed either a reduction of relative sympathetic tone or no deterioration by heart rate variability analysis. Total, low-frequency, and high-frequency power increased during the 24 hours after radiotherapy compared with before radiotherapy, whereas the low-frequency/high-frequency ratio was relatively decreased. These results were similar in the analyses for both the day (8:00-21:00) and night (23:00-6:00) periods. At the 5-year follow-up, 6 patients had died of cancer and 2 were alive, neither of whom had a history of hospitalization due to a cardiac event. As mentioned earlier, it is possible that carbon-ion radiotherapy does not result in acute cardiotoxicity and that it exerts antiarrhythmic effects through modification of the arrhythmia substrate or the autonomic nerves [39]. This study has several limitations. First, causality is difficult to establish in a relatively small number of patients with various arrhythmias; therefore, the data are vulnerable to biases. Nevertheless, the findings suggest that the antiarrhythmic action obtained as a secondary effect may be a silver lining for cancer patients suffering from arrhythmia.
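For readers unfamiliar with frequency-domain heart rate variability, the LF and HF powers referred to above are obtained by spectral analysis of the RR-interval series. The following sketch (illustrative Python using the standard band definitions, not the code used in the study) outlines the computation of LF (0.04-0.15 Hz), HF (0.15-0.40 Hz), and their ratio.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid
from scipy.signal import welch

def lf_hf(rr_ms, fs=4.0):
    """LF/HF analysis of consecutive RR intervals given in milliseconds."""
    t = np.cumsum(rr_ms) / 1000.0                     # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)           # evenly sampled time grid
    rr_even = interp1d(t, rr_ms, kind="cubic")(grid)  # resampled tachogram
    f, pxx = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(grid)))       # power spectral density
    lf_band = (f >= 0.04) & (f < 0.15)                # low-frequency band
    hf_band = (f >= 0.15) & (f < 0.40)                # high-frequency band
    lf = trapezoid(pxx[lf_band], f[lf_band])          # LF power
    hf = trapezoid(pxx[hf_band], f[hf_band])          # HF power
    return lf, hf, lf / hf
```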
When applying a heavy ion beam to arrhythmia treatment, accurate targeting can be expected, especially for the epicardial arrhythmia substrate, by taking advantage of the Bragg peak characteristics. In the United States, the feasibility of atrioventricular node ablation was investigated in Langendorff-perfused porcine hearts using a scanned carbon beam [40]. In Europe, a technique of minimally invasive ablation from outside the body using carbon ions was tested as an alternative therapy to catheter ablation in patients with atrial fibrillation [41]. Clinical trials in progress are also analyzing the application of particle beams against VT (Phase I/II Study, NCT02919618, ENCORE-VT). Although not a particle beam study, a study of stereotactic irradiation with x-rays to the trunk for VT in humans reported a significant suppressive effect in all 5 patients [42]. Although unresolved problems remain, such as safety and the establishment of a minimum effective dose, the practical application of heavy ion beams in non-cancer treatment, such as arrhythmia treatment, is highly anticipated.

ADDITIONAL INFORMATION AND DECLARATIONS

Mari Amino and Koichiro Yoshioka participated equally in the current study. Conflicts of Interest: The authors declare no conflicts of interest associated with this manuscript.
2019-04-10T13:12:02.392Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "439185b1d4f1ab9e537ae6cf2d9385753f2f0b7a", "oa_license": "CCBY", "oa_url": "https://meridian.allenpress.com/theijpt/article-pdf/5/1/140/1737030/ijpt-18-00022_1.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "a84a0dcb1237b060d3cbaf0a3b021960d07fedfa", "s2fieldsofstudy": [ "Medicine", "Environmental Science", "Physics" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
9940605
pes2o/s2orc
v3-fos-license
Structural Genomic Variation as Risk Factor for Idiopathic Recurrent Miscarriage

Recurrent miscarriage (RM) is a multifactorial disorder with acknowledged genetic heritability that affects ∼3% of couples aiming at childbirth. As copy number variants (CNVs) have been shown to contribute to reproductive disease susceptibility, we aimed to describe the genome-wide profile of CNVs and to identify common rearrangements modulating the risk of RM. Genome-wide screening of Estonian RM patients and fertile controls identified an excessive cumulative burden of CNVs (5.4 and 6.1 Mb per genome) in two RM cases, possibly increasing their individual disease risk. Functional profiling of all rearranged genes within the RM study group revealed a significant enrichment of loci related to innate immunity and to immunoregulatory pathways essential for immune tolerance at the fetomaternal interface. As a major finding, we report a multicopy duplication (61.6 kb) at 5p13.3 conferring an increased maternal risk of RM in Estonia and Denmark (meta-analysis, n = 309/205, odds ratio = 4.82, P = 0.012). Comparison to an Estonian population-based cohort (total, n = 1000) confirmed the risk for Estonian female cases (P = 7.9 × 10⁻⁴). Datasets of four cohorts from the Database of Genomic Variants (in total, n = 5,846 subjects) exhibited a similarly low duplication prevalence worldwide (0.7%-1.2%) compared to the RM cases of this study (6.6%-7.5%). The CNV disrupts the PDZD2 and GOLPH3 genes, which are predominantly expressed in the placenta, and it may represent a novel risk factor for pregnancy complications.

Recurrent miscarriage cases and fertile control samples

The study subjects from Estonia have been recruited at the Women's Clinic of Tartu University Hospital and the Nova Vita Clinic, Tallinn, Estonia, since 2003. The Danish subjects have been recruited at the Danish Recurrent Miscarriage Clinics, Copenhagen and Aalborg, Denmark, since 1986. In total, the current study included 558 RM patients (80 female and 39 male partners from Estonia; 229 female and 210 male cases from Denmark) (Supp. Table S1). All recruited patients had a normal karyotype, tested from peripheral blood lymphocyte cultures, and known clinical risk factors for RM had been excluded. Female patients had normal menstrual cycles, no uterine anomalies (by hysterosalpingography, hydrosonography, or hysteroscopy), and no antiphospholipid syndrome. The female patients were further subphenotyped as having either primary RM (Estonia, n = 46, median age 29.5 years, 5th-95th percentile 20.3-36.8 years; Denmark, n = 113, 29.0 years, 22.0-36.8 years) or secondary RM (Estonia, n = 34, median age 31.0 years, 5th-95th percentile 23.0-37.0 years; Denmark, n = 116, 30.0 years, 23.8-39.0 years), based on the occurrence of consecutive miscarriages either before any live births or following one or more live births, respectively. The study was conducted according to the principles of the Declaration of Helsinki, and written informed consent was obtained from each individual prior to recruitment and the collection of blood samples for DNA extraction.

Population-based cohort samples from the Estonian Biobank (EGCUT)

The carrier status of the identified RM-associated risk CNV at 5p13 was additionally determined for population-based cohort samples (n = 1000; 504 men, 496 women) drawn from the Estonian Biobank. The cohort (18 years of age and older) closely reflects the age, sex, and geographical distribution of the Estonian population. Studies of the genetic structure of Europe have placed Estonia in the proximity of the Northern and Eastern European populations (Nelis et al., 2009).
As the EGCUT study protocol incorporates information on the self-reported experience of spontaneous miscarriages of the recruited subjects and does not include a confirmed medical diagnosis of recurrent miscarriage, the self-reported data were not addressed in this study. The entire project is conducted according to the Estonian Gene Research Act, and all of the participants have signed the broad informed consent.

Discovery phase: genome-wide SNP genotyping and CNV detection

In the discovery phase, approximately one-third of the Estonian sample (n = 70; fertile controls, n = 27; RM cases, n = 43; Supp. Table S2) was genotyped using the Illumina Human370CNV-Quad SNP array (Illumina) (Genotyping Core Facility, Estonian Biocentre), with an SNP call rate >99.4% for all samples (median 99.8%). For each sample, calling of CNVs from the resulting genome-wide genotyping data was performed in parallel with two Hidden Markov Model-based algorithms. Normalized signal intensity data were analysed with the QuantiSNP program (Colella et al., 2007), calling CNVs independently for each individual. The adjustment for 'genomic waves' in signal intensities (Diskin et al., 2008) was turned on with the '--doGCcorrect' key, and '--emiters 25' together with '--Lsetting 1000000' were added to the default calling parameters. A parallel analysis was performed on the same genotyping data with the PennCNV algorithm (Wang et al., 2007), using an Estonian population-specific B-allele frequency file (PFB file, calculated from 1000 EGCUT samples) as a reference dataset (Nelis et al., 2009). The default parameters and the adjustment of genomic waves in signal intensities were applied as in the QuantiSNP analysis. The initial CNV calls from the two algorithms were merged, and only CNVs that were called by both algorithms for the same individual in the same genomic loci were considered in the subsequent analysis. As demonstrated previously, using the intersection of CNV predictions from more than one algorithm effectively minimizes the number of false-positive calls (Winchester et al., 2009; Pinto et al., 2011; Kim et al., 2012). CNVs with a QuantiSNP log Bayes Factor (LBF) value <5 and/or rearrangements shorter than 250 bp were excluded from the resulting list of CNVs. For the EGCUT population-based cohort samples, the microarray data were processed in a similar manner using parallel CNV calling by QuantiSNP and PennCNV.
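The consensus step, keeping only CNVs called by both algorithms for the same individual at overlapping coordinates, can be illustrated as follows. This is a simplified Python sketch with assumed record fields, not the study's pipeline; the study does not specify whether the intersected or the full segment was retained, so the intersection is shown.

```python
# Illustrative consensus of two CNV call sets (e.g., QuantiSNP and PennCNV).
# Each call is a dict: {"sample": str, "chrom": str, "start": int,
#                       "end": int, "state": "del" or "dup"}.

def consensus_cnvs(calls_a, calls_b):
    """Keep CNVs supported by both call sets for the same individual,
    chromosome, and copy-number state, with overlapping coordinates."""
    kept = []
    for a in calls_a:
        for b in calls_b:
            same = (a["sample"] == b["sample"] and a["chrom"] == b["chrom"]
                    and a["state"] == b["state"])
            overlap = min(a["end"], b["end"]) - max(a["start"], b["start"])
            if same and overlap > 0:
                # report the overlapping (intersected) segment
                kept.append({**a,
                             "start": max(a["start"], b["start"]),
                             "end": min(a["end"], b["end"])})
                break
    return kept
```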
Conversion of genomic coordinates

Following the CNV detection in the discovery phase and descriptive statistics of the identified CNVs and CNVRs, we transferred all CNVR breakpoint coordinates from the reference genome assembly (NCBI36/hg18) to the latest version of the human reference sequence (GRCh37/hg19) in order to acquire up-to-date genomic annotation data. Conversion of coordinates was performed with the liftOver web-based batch conversion tool (http://genome.ucsc.edu/cgi-bin/hgLiftOver) using the default parameters. For 15 regions in total, the conversion was not possible, as these regions were duplicated, deleted, or split in the newer version of the human reference genome assembly.

Functional enrichment analysis

The list of genes disrupted by CNVRs was acquired for the sample set of RM patients as well as for the fertile controls using the Ensembl PERL API to query the Ensembl database (version 69; http://www.ensembl.org/index.html). The query was extended by 10 kb on either side of the CNVRs in order to account for possible effects on the regulatory regions in the immediate proximity of the involved genes (Pinto et al., 2010). Functional enrichment analysis of the resulting gene sets was carried out separately for the fertile controls and all RM patients using the g:Profiler g:GOSt web-based software (http://biit.cs.ut.ee/gprofiler/) (Reimand et al., 2007; Reimand et al., 2011). Considering all genes in user-provided gene lists or chromosomal regions, g:Profiler performs statistical gene set enrichment analysis to find which functional groups and/or biological pathways are significantly overrepresented among the user-provided genes, often helping with the biological interpretation of high-throughput experiments. Results of the Gene Ontology (GO) and Reactome (REAC) datasets up to the third relative hierarchy level were taken into account, and enrichment for functional terms was considered significant if the multiple-testing-corrected enrichment P-value was <0.05. A reciprocal functional enrichment analysis was undertaken for the functional terms specifically enriched in either the RM cases or the fertile controls to determine the group-specific level of enrichment.

Experimental copy number estimation of prioritized CNVRs using TaqMan qPCR

In order to define the DNA region within the prioritized CNVRs most informative for experimental copy number estimation and placement of TaqMan qPCR assays, the genomic range of each selected rearrangement was narrowed down by identifying the minimal region overlapping between the CNV carriers, based on the SNP array CNV calling data. For experimental testing, seven of the prioritized CNVRs (sized 294 bp to 49.6 kb; median 10.4 kb) were targeted with one TaqMan qPCR assay each, whereas the two largest CNV loci, a 52.4 kb duplication at chromosomal position 5p13.3 and an 84.1 kb deletion at 14q32.33, were addressed with two assays in parallel (Supp. Table S5; Applied Biosystems). Half of the samples on each plate were controls and half were RM patients, in random order. Each plate included a population-specific DNA pool of randomly selected fertile controls (n = 50) from either Estonia or Denmark. The copy number of a target region was determined using the relative quantification method, by normalization to the RNase P reference and to the population-specific DNA pool. The diploid genomic copy number was calculated by multiplying the normalized TaqMan qPCR copy number estimates by two. Due to the limitations of the TaqMan qPCR assay in accurately determining very high diploid copy numbers, individuals with an estimated locus copy number larger than four were assigned to the copy number class '>4 copies per diploid genome'.
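The relative-quantification arithmetic behind these copy number estimates can be sketched as follows. This is an illustrative Python sketch assuming the usual 2^-ΔΔCt form and a diploid calibrator pool; the exact computation used in the study is not detailed in the text.

```python
# Minimal sketch of qPCR relative quantification for copy number estimation.
# Assumptions: standard 2^-ddCt model; the control-DNA pool carries 2 copies
# of the target locus per diploid genome.

def diploid_copy_number(ct_target, ct_rnasep, ct_target_pool, ct_rnasep_pool):
    d_ct_sample = ct_target - ct_rnasep           # sample: target vs. reference
    d_ct_pool = ct_target_pool - ct_rnasep_pool   # calibrator (DNA pool)
    ratio = 2.0 ** (-(d_ct_sample - d_ct_pool))   # relative quantity vs. pool
    return 2.0 * ratio                            # scale: pool = 2 copies

# Example: after reference normalization, a sample amplifying one cycle
# earlier than the pool doubles the ratio -> ~4 copies per diploid genome.
print(diploid_copy_number(24.0, 25.0, 25.0, 25.0))  # -> 4.0
```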
Experimental confirmation of the genomic position and range of the 5p13.3 CNV

The 5p13.3 duplication endpoints estimated by the SNP array were confirmed with four EvaGreen qPCR assays (EvaGr assays 1-4; Supp. Table S6; Supp. Figure S1) flanking the predicted breakpoints. Singleplex amplification reactions were run using 0.2-0.4 µM primers, HOT FIREPol® EvaGreen® qPCR Mix Plus (ROX) (Solis Biodyne) as per the manufacturer's instructions, and 10 ng of genomic DNA from samples with a known 5p13.3 copy number based on the TaqMan qPCR copy number typing. Each copy number class (2, 3 or 4 copies per diploid genome) was represented by three individuals, except for the >4 copies/diploid genome class, identified in only one carrier. A reference gene, albumin (ALB; MIM# 103600), was amplified in parallel with the target loci for every sample under the same conditions and on the same plate (primers in Supp. Table S6). All reactions were run in triplicate and detected on an ABI Prism 7900HT Sequence Detection System (Applied Biosystems). Copy number was estimated using the absolute quantification method and normalized to the reference gene ALB and the population-specific pool of control DNAs.

Fine-mapping of the 5p13.3 rearrangement

The exact position of the duplication breakpoint junction in three 5p13.3 duplication carriers was determined by DNA sequencing with primer walking, targeting the breakpoint junction region (5.8 kb) defined by the EvaGreen mapping. The sequencing was performed using the ABI Prism dGTP BigDye Terminator v3.0 Ready Reaction Cycle Sequencing Kit (Applied Biosystems) (junction-specific primers provided in the Supp. Materials).

Expression profile analysis of the PDZD2 and GOLPH3 genes

The expression analysis was performed using the human tissue cDNA panels Human MTC panel I (Lot no 7080213) and Human MTC panel II (Lot no 8030311A) (BD Biosciences Clontech, Palo Alto, CA). Human MTC panel I consists of human cDNA samples from brain (pool of n = 2 samples), heart (n = 3), kidney (n = 5), liver (n = 3), lung (n = 4), pancreas (n = 15), placenta (n = 8) and skeletal muscle (n = 7). Human MTC panel II is compiled from human cDNAs from colon (n = 5), ovary (n = 15), peripheral blood leucocytes (information not available), prostate (n = 98), small intestine (n = 32), spleen (n = 3), testis (n = 45) and thymus (n = 9). The expression profiles of the GOLPH3 and PDZD2 genes were determined with a TaqMan qPCR approach using pre-designed TaqMan Gene Expression Assays (Applied Biosystems). The total volume of the duplex amplification reactions was 10 µl and included 1 µl of cDNA, 0.5 µl of the HPRT (MIM# 308000) TaqMan assay used as a reference, 0.5 µl of a target-specific TaqMan assay (GOLPH3, Hs00223239_m1; PDZD2, Hs01054836_m1) and HOT FIREPol Probe qPCR Mix Plus (ROX) (Solis Biodyne) as per the manufacturer's instructions. All reactions were run in triplicate and detected using an ABI Prism 7900HT Sequence Detection System (Applied Biosystems). Relative expression of the target genes was determined by normalization to the reference HPRT transcript.

Copy number assignment of TaqMan qPCR values

The copy number assignment for the simple deletion and/or duplication polymorphisms at the 2p11.2 and 4q25 CNVRs was performed manually, owing to the clear clustering of TaqMan qPCR values into distinct copy number classes (Supp. Figure S5). For the multicopy 5p13.3 locus, the average copy number ratio of the two TaqMan assays located within the rearranged region was used to assign each sample to a distinct copy number cluster with the k-means clustering method in the statistical package R (ver. 2.15.0; http://www.R-project.org/). The number of clusters was inferred from the number of peaks in the averaged copy number distribution data (Supp. Figure S2). Three clusters (representing diploid copy numbers 2, 3 and 4) were used for the analysis of the Estonian samples, and four clusters (diploid copy numbers 2, 3, 4 and >4) for the Danish samples.
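In the study this clustering step was run in R; the sketch below mirrors the same idea in Python with scikit-learn, using illustrative input values:

import numpy as np
from sklearn.cluster import KMeans

def assign_copy_number_clusters(avg_ratios, k):
    """Cluster per-sample averaged TaqMan copy number ratios into k classes."""
    x = np.asarray(avg_ratios, dtype=float).reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)
    # Relabel clusters so that label 0 is the lowest copy number class, etc.
    order = np.argsort(km.cluster_centers_.ravel())
    rank = np.empty(k, dtype=int)
    rank[order] = np.arange(k)
    return rank[km.labels_]

# Example: k = 3 clusters for diploid copy numbers 2, 3 and 4 (Estonian samples);
# k = 4 would add the '>4' class used for the Danish samples.
labels = assign_copy_number_clusters([2.1, 1.9, 3.0, 2.9, 4.1, 2.0], k=3)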
Supp. Figure S1. (A) Genomic context of 5p13.3 involving the PDZD2 and GOLPH3 genes, based on the UCSC database (hg19). The opposite transcription of the PDZD2 and GOLPH3 genes is indicated with blue and green arrows, respectively. DGV Struc Var, structural variation data from the Database of Genomic Variants. (B) Mapping of duplication endpoints using TaqMan qPCR and EvaGreen qPCR assays. Yellow and red arrowed lines indicate the alternative duplication regions predicted by the SNP array. Blue and green boxes in the schematic representation of the genes denote the coding regions of PDZD2 and GOLPH3, respectively, with exon numbers given above. Locations of qPCR assays are indicated with black arrows. Altogether, four EvaGreen qPCR assays (EvaGr assays 1-4) and one TaqMan qPCR assay in the PDZD2 gene (Hs02781158_cn; Supp. Table S5) bordering the predicted duplication breakpoints were used for experimental fine-mapping. Copy number values are given as mean ± SD of three representative individuals in each copy number class (2, 3 or 4 copies/diploid genome), except for the copy number class of >4 copies, with only one individual available for testing.

Supp. Figure S2. Distribution of unrounded copy number estimations for the PDZD2:GOLPH3 duplication based on TaqMan qPCR in Estonian and Danish RM cases (n = 119 and n = 439, respectively) and fertile controls (n = 90 and n = 115, respectively). All study subjects were further tested with the duplication junction-specific PCR (BP-PCR) to confirm the presence (green bars) or absence (black bars) of the tandem duplication. Based on this double estimation, all subjects were divided into subgroups of duplication carriers or noncarriers, subsequently used in association testing. Separation of duplication carriers into distinct copy number classes (3, 4 or >4 copies per genome) is indicated (see also Supp. Materials and Methods). Because the TaqMan qPCR assay cannot accurately determine very high diploid copy numbers, an individual with an estimated locus copy number larger than four was assigned to the copy number class '>4 copies per diploid genome'.

Supp. Figure S3. Cumulative length of all CNVs per individual among the discovery-phase Estonian RM cases (n = 43) and fertile controls (n = 27). The boxes represent the 25th and 75th percentiles of the data, whereas the whiskers cover the extent of the data up to 1.5× the interquartile range. The median value is denoted by the line bisecting the boxes. Study subjects with outlier values are indicated with circles. RM, recurrent miscarriage; F, female; M, male.

Supp. Figure S5. Comparative copy number estimations for the prioritized CNV regions, predicted from SNP microarray genotyping data and precisely determined with TaqMan qPCR in the Estonian discovery sample (n = 70). SNP microarray prediction data are given according to the QuantiSNP CNV calling results, owing to their higher accuracy in copy number estimation compared with PennCNV in our dataset. Each CNV region was targeted with one TaqMan assay, except for the largest CNVR in the group, 5p13.3 (52.4 kb, predicted from SNP array data), which was targeted with two assays within the locus (Supp. Table S5). Additional CNV carriers detected with TaqMan qPCR but unidentified by SNP microarray-based CNV calling are highlighted in red. TaqMan qPCR copy number values are unrounded and given per diploid genome.

Supp. Figure S6. Previously reported CNVs and segmental duplications for the complex prioritized CNV regions not carried forward to the genetic association study in the full Estonian sample set (Supp. Table S4). The CNV data are based on the Database of Genomic Variants (DGV) as presented in the UCSC browser (http://genome.ucsc.edu/cgi-bin/hgGateway). Chromosomal positions are given according to hg19.
Red CNV tracks represent deletions, blue tracks duplications, brown tracks both deletions and duplications relative to the reference, and purple tracks denote inversions.

Footnotes to Supp. Table S4:
Presence in the DGV was considered as confirmation of the prioritized CNVR as a true CNV locus.
g Although confirmed as copy number variable by TaqMan qPCR, the CNVR was excluded from further study due to inconsistent copy number data, likely owing to the complex genomic context of the rearrangement (Supp. Figure S6).
h CNV regions characterized by both deletion and duplication events in the studied subjects.
i Two TaqMan qPCR assays were applied for the quantification of the two largest regions among the prioritized CNVRs.
Capacity and Error Rate Analysis of MIMO Satellite Communication Systems in Fading Scenarios

In this paper, we investigated the capacity and bit error rate (BER) performance of Multiple Input Multiple Output (MIMO) satellite systems with single and multiple dual-polarized satellites in geostationary orbit and a mobile ground receiving station with multiple antennas. We evaluated the effects of system parameters, such as the number of satellites, the number of receive antennas and the SNR, as well as environmental factors, including atmospheric signal attenuation and signal phase disturbances, on the overall system performance, using both analytical and spatial models for MIMO satellite systems.

I. INTRODUCTION

Multiple-Input Multiple-Output (MIMO) wireless communication systems have been a focus of academic and industrial research in the last decade due to their potentially higher data rates in comparison with Single-Input Single-Output (SISO) systems [1]. Theoretically, the overall channel capacity can be increased linearly with the number of transmit and receive antennas by using spatial multiplexing schemes [1]. Current work on satellite communication (SatCom) systems recognizes a demand for higher data rates; hence, it appears appropriate to apply MIMO to SatCom systems in order to increase the available data rate and bandwidth efficiency. The quality of service (QoS) and data rate requirements of satellite communication systems have recently been increasing, making MIMO techniques attractive for achieving greater spectral and bandwidth efficiency in satellite systems [2]. Spatial multiplexing and diversity maximization schemes can be deployed to achieve better spectral efficiencies and bit error rates (BER) than the classical single-satellite, single-receive-station systems.

In [2], MIMO satellite uplink and downlink channels that are optimal in terms of achievable data rates were analyzed. The authors showed that capacity optimization is generally possible for regenerative payload designs using Line of Sight (LOS) channel models. These analyses were extended to a number of MIMO satellite communication systems in [3], and the scope was further broadened to the general case of satellites with transparent communication payloads. A cluster-based channel model was proposed for MIMO satellite formation systems in [4]. Based on the standardized models for terrestrial MIMO systems, the authors proposed a spatial model and analysed the capacity of formation-flying satellite systems.

In this contribution, we analyse the performance of satellite communication systems with multiple cooperating satellites in geostationary orbit (GEO) and single or multiple antennas at the ground receiving station. The analysis in this paper is based on three different modelling approaches for land mobile satellite systems. The remaining part of this paper is organized as follows. In Section II, we present the system model for MIMO satellite systems. A review of the propagation channel models considered in the paper is presented in Section III. In Section IV, we derive expressions for the channel capacity and bit error rate with the MPSK modulation scheme. Simulation results and discussions are presented in Section V. Finally, we draw conclusions in Section VI.
II. SYSTEM MODEL

In this section, we present the system model for single satellite, multiple receive antenna systems (SS-MRA) and multiple satellite, multiple receive antenna systems (MS-MRA).

A. Single Satellite - Multiple Receive Antennas (SS-MRA)

Consider the downlink of a land-mobile satellite receive diversity system consisting of a single dual-polarized satellite antenna and a mobile receive station with M non-polarized antennas. The channel impulse response between the satellite and the mobile receive station can be modelled as an M × 2 MIMO communication channel H = [h_ij], where h_ij is the channel between the j-th transmit polarization and the i-th receive antenna. The received signal at the i-th antenna of the mobile station is

y_i = Σ_{j=1,2} h_ij x_j + n_i,  i = 1, ..., M.  (2)

A matrix representation for the receive signal model in (2) is thus

y = Hx + n,  (3)

where y = [y_1, y_2, ..., y_M]^T is an M × 1 vector of the received signals at the M receive antennas, x = [x_1, x_2]^T is a vector of transmitted symbols on the two polarizations of the satellite antenna, and n = [n_1, n_2, ..., n_M]^T is an M × 1 noise vector whose entries are assumed to be complex Gaussian random variables with zero mean and variance σ².

B. Multiple Satellite - Multiple Receive Antennas (MS-MRA)

We consider a satellite diversity system comprising N dual-polarized satellites and a mobile ground receiving station with M equally spaced antennas. This corresponds to a 2N × M multiantenna wireless system. However, since the satellite antennas are not co-located, the relative delay between signal transmissions from each satellite needs to be accounted for in the system model [3]. The received signal at the mobile station can therefore be modelled as

y(t) = Σ_{i=1}^{N} H_si^T x_i(t − τ_i) + n(t),  (4)

where H_si is the 2 × M impulse response matrix for the channel between the i-th satellite and the M receive antennas, y(t) = [y_1(t), y_2(t), ..., y_M(t)]^T are the received signals, x_i(t) = [x_{i,1}(t), x_{i,2}(t)]^T are the transmitted signals on the two polarizations of satellite i, and τ_i is the relative delay experienced by signals from the i-th satellite with respect to the reference satellite.
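As a concrete illustration of the SS-MRA receive model (3), the following minimal NumPy sketch generates one realization; it is not from the paper, and the i.i.d. Rayleigh channel entries and QPSK symbols are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
M, sigma2 = 4, 0.1  # receive antennas and noise variance (illustrative)

# M x 2 channel between the two satellite polarizations and the M antennas
H = (rng.standard_normal((M, 2)) + 1j * rng.standard_normal((M, 2))) / np.sqrt(2)
# one QPSK symbol per polarization
x = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2), size=2)
# zero-mean complex Gaussian noise with variance sigma2 per antenna
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

y = H @ x + n  # received vector at the M antennas, cf. (3)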
III. CHANNEL MODELS

We consider three different models for our evaluations in this paper: the cluster-based spatial satellite MIMO model [4], the Loo distribution-based analytical model [7], [8] and the physical-statistical land mobile satellite model [2]. A brief description of these satellite channel models is presented in this section.

A. Cluster Based MIMO Satellite Model

In [4], a cluster-based MIMO model was proposed for MIMO satellite systems using the concept of clustering from the standardized WINNER II/3GPP model for terrestrial MIMO systems. The spatial model expresses each channel coefficient as [4]

h_nm(t) = h_nm^LOS(t) + h_nm^NLOS(t),  (5)

where h_nm^LOS is the line of sight (LOS) component of the channel impulse response between the n-th satellite and the m-th ground receiver antenna. The second term on the RHS of (5) is the non-line-of-sight (NLOS) component of the channel, which is modelled as a summation over P clusters, each cluster comprising R rays. The LOS and NLOS components are modelled as in [4] in terms of the following parameters: P_p is the normalised power of the p-th multipath component (MPC); R is the number of rays within each cluster (assumed constant in the model); Φ is the ionospheric power loss compensation factor for each ray in the clusters; G_R(θ) is the ground receive station array gain for each antenna in the array; θ_rp is the AOA of the r-th ray in the p-th cluster; σ is the shadow fading coefficient of the rays; P is the path loss; G_T(φ) is the satellite transmit antenna response for rays with AOD φ; φ_rp is the AOD of the r-th ray of the p-th cluster; λ is the wavelength; d_s is the inter-satellite spacing; d_m is the spacing between the antennas of the mobile ground receiving station array; V_m is the velocity of the receive station; Υ is the ionospheric angular deviation compensation; and ϑ is the direction of motion of the ground receive station.

B. Free Space LOS Model

The free space MIMO satellite model considers only the line of sight (LOS) component of the fading channel. Each entry of the MIMO impulse response matrix is defined by [2]

h_ij = α_ij exp(−j k_0 r_ij),  (8)

where f_c is the carrier frequency, r_ij is the geometrical distance between the j-th satellite transmit antenna and the i-th mobile ground receive station antenna, k_0 = 2π f_c / v_0 is the wave number, v_0 is the free space speed of light, and α_ij is the complex attenuation of the propagation path, defined as

α_ij = |α_ij| exp(jφ),  (9)

where φ is the phase of the carrier, assumed equal for all antenna pairs. Since the approximation r_ij ≈ r ± 3 km, ∀ i, j, is applicable to the satellite systems considered in this paper, the channel path gains can therefore be approximated by [10]

|α_ij| ≈ |α| = C,  ∀ i, j,  (10)

where C is a constant and |a| denotes the absolute value of a.

C. Analytical MIMO Satellite Model

The Loo distribution [7] is often used for the analytical modelling of land mobile satellite channels. The MIMO impulse response for the multi-polarization and multiantenna channel considered in this paper can therefore be modelled as a summation of two parts,

H = H_s + H_m,  (11)

where H_s models the shadowing effect of the channel and its entries are generated using the log-normal distribution, and H_m is the multipath component of the channel with Rayleigh distributed entries. The Loo distribution-based analytical model characterizes the channel statistics using a probability density function (pdf) and a cumulative distribution function (CDF). A general assumption is that the propagating wave undergoes both attenuation and scattering/reflection. As given in (11), the complex channel envelope is a summation of Rayleigh and log-normal faded envelopes. The pdf of the received envelope is defined as [7]

f(r) = (r / (c_0 σ_r √(2π))) ∫_0^∞ (1/z) exp( −(ln z − µ)² / (2σ_r²) − (r² + z²) / (2c_0) ) I_0(r z / c_0) dz,  (12)

where µ and σ_r² are the mean and variance of the received signal envelope, respectively, c_0 gives the average power of the scattered component of the transmitted signal, and I_0(·) is the zeroth-order modified Bessel function of the first kind.

IV. CHANNEL CAPACITY AND BER

In this section, we present the channel capacity and theoretical bit error rate (BER) expressions.

A. Channel Capacity

The channel capacity for a narrowband MIMO system without channel state information at the transmitter (CSIT) is generally given by Telatar's spectral efficiency equation [9]

C = log_2 det(I_{M×M} + ρ H H^H),  (13)

where (·)^H denotes the Hermitian transpose of a matrix and ρ is the linear signal-to-noise ratio value computed from the logarithmic SNR by

ρ = 10^(SNR/10).  (14)

Similar to [2], ρ is defined as the ratio of the transmit power at each of the satellite antennas to the noise power at each mobile ground receive antenna. The decibel value of the SNR in (14) is defined as

SNR = EIRP + G/T − K − 10 log_10(B),  (15)

where EIRP is the effective isotropic radiated power, G/T is the satellite figure of merit, K is the dB equivalent of Boltzmann's constant and B is the downlink transmission bandwidth.
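To illustrate how the Loo-type channel (11) and the capacity expressions (13)-(14) fit together, the following Monte Carlo sketch estimates the ergodic capacity; the distribution parameters and the way the log-normal and Rayleigh parts are mixed are illustrative assumptions, not the paper's simulation settings:

import numpy as np

def loo_channel(n_rx, n_tx, mu=-0.1, sigma=0.3, scatter_power=0.2, rng=None):
    """Loo-type channel: log-normal shadowed direct part plus Rayleigh multipath."""
    rng = rng or np.random.default_rng()
    shadow = rng.lognormal(mu, sigma, (n_rx, n_tx))
    phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n_rx, n_tx)))
    rayleigh = np.sqrt(scatter_power / 2) * (
        rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx)))
    return shadow * phase + rayleigh  # H = H_s + H_m, cf. (11)

def ergodic_capacity(n_rx, n_tx, snr_db, trials=2000):
    rng = np.random.default_rng(0)
    rho = 10.0 ** (snr_db / 10.0)  # linear SNR, cf. (14)
    caps = []
    for _ in range(trials):
        H = loo_channel(n_rx, n_tx, rng=rng)
        G = np.eye(n_rx) + rho * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(G).real))  # C in (13)
    return float(np.mean(caps))

print(ergodic_capacity(n_rx=4, n_tx=2, snr_db=20))  # 2 x 4 MIMO example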
B. Bit Error Rate (BER)

Following the analysis and derivations in [5], a closed-form approximation for the probability of error for MPSK-modulated transmission in additive white Gaussian noise (AWGN) is given as [5]

P_e(x) ≈ (2 / max(log_2 M, 2)) Σ_{k=1}^{⌈M/4⌉} Q( sin((2k − 1)π/M) √(2σx) ),  (16)

where M is the constellation size, σ is the SNR per symbol, x is a chi-square distributed random variable, Q(·) is the Gaussian Q-function and ⌈M/4⌉ denotes the smallest integer greater than or equal to M/4. Assuming that the mobile ground receive station uses a zero forcing (ZF) receiver, the MPSK BER can be obtained by integrating the error probability in (16) over x,

MPSK BER = ∫_0^∞ P_e(x) P_X(x) dx,  (18)

where P_X(x) is the chi-square probability density function. It can be shown that a closed-form expression for (18) is [6]

MPSK BER = (2 / max(log_2 M, 2)) Σ_{k=1}^{⌈M/4⌉} ((1 − µ_k)/2)^U Σ_{i=0}^{U−1} C(U − 1 + i, i) ((1 + µ_k)/2)^i,  (19)

where U = N − M + 1, C(·, ·) denotes the binomial coefficient, and µ_k is given by

µ_k = √( σ sin²((2k − 1)π/M) / (1 + σ sin²((2k − 1)π/M)) ).  (20)
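A numerical sketch of evaluating the closed-form average MPSK error rate (19)-(20) is given below; the exact correspondence with the expression in [6] is an assumption, and the diversity order U is treated as a free parameter:

import math

def mpsk_ber_zf(m_psk, snr_per_symbol, diversity_u):
    """Closed-form average MPSK error rate of the form of (19)-(20)."""
    total = 0.0
    for k in range(1, math.ceil(m_psk / 4) + 1):
        g = math.sin((2 * k - 1) * math.pi / m_psk) ** 2
        mu = math.sqrt(g * snr_per_symbol / (1 + g * snr_per_symbol))  # mu_k, cf. (20)
        inner = sum(math.comb(diversity_u - 1 + i, i) * ((1 + mu) / 2) ** i
                    for i in range(diversity_u))
        total += ((1 - mu) / 2) ** diversity_u * inner
    return 2 * total / max(math.log2(m_psk), 2)

# Example: QPSK at 10 dB SNR per symbol with diversity order U = 3.
print(mpsk_ber_zf(4, 10 ** (10 / 10), 3))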
V. SIMULATION RESULTS

In this section, we present simulation results for the capacity and BER of different configurations of MIMO satellite systems with the models presented in Section III. The simulation parameters are shown in Table I, except where otherwise stated. The intersatellite spacing for systems with M > 2 receive antennas is calculated using the equation in [ ].

In Figure 2, we present the capacity (in bps/Hz) as a function of SNR for a linear-formation multiple satellite system using the cluster-based spatial channel model. The number of satellites and receive antenna elements is varied between 1 and 8. As shown in the figure, increasing the signal to noise ratio (SNR) increases the channel capacity for all antenna sizes, as expected. The capacity also increases with the number of satellites and/or receive station antenna elements. For instance, the capacity difference between a 2 × 2 and a 4 × 4 satellite system at SNR = 30 dB is about 10 dB.

Figure 3 presents the complementary capacity cumulative distribution function (CCDF) for a dual-polarized satellite system and a mobile ground receive station with four antenna elements (corresponding to a 2 × 4 MIMO system) at different signal to noise ratio (SNR) levels. The CCDF plots show that the variance of the channel capacity is considerably small at each SNR level. The capacity increase with SNR can also be clearly observed from Figure 3.

In Figure 4, we compare the capacity for different numbers of satellites and receive antennas using the Loo distribution-based analytical satellite channel model for single and multi-satellite scenarios. Clearly, the channel capacity again increases with both SNR and antenna sizes.

In Figure 5, we plot the MIMO satellite channel capacity versus SNR for both single satellite, multiple receive antenna (SS-MRA) and multiple satellite, multiple receive antenna (MS-MRA) ground station configurations using the line of sight (LOS) approximation model (detailed derivations and justification can be found in [2]). As can be observed from the figure, the channel capacity obtained using the LOS approximation model shows a similar trend and compares well with the capacity for similar scenarios using the cluster-based and analytical channel models.

Figure 6 presents the complementary capacity cumulative distribution function (CCDF) for a dual-polarized satellite system and a mobile ground receive station with four antenna elements (corresponding to a 2 × 4 MIMO system) at different SNR levels using the LOS approximation model.

Finally, we plot the bit error rate (BER) versus SNR for a two-satellite, two-receive-antenna system using the three types of model described in Section III. As shown in the figure, the cluster-based model gives a lower BER at high SNR. However, no significant difference is observed between the BER curves for the three channel models in the low SNR region.

In summary, the results presented in this section show that the spectral efficiency of satellite systems can be significantly improved by having multiple satellites and multiple antennas at the ground station. MIMO dual-polarized satellite systems can provide increased spectral efficiency and improved bit error rate (BER) compared with classical single satellite systems.

VI. CONCLUSION

In this paper, we analyzed the capacity and BER of different multiple satellite scenarios using different models. Simulation results showed that increasing the number of satellites and/or ground receive station antennas can significantly increase the capacity and decrease the bit error rate.